
Saturday, December 16, 2006

Internet Basics

Sometime in the mid-1960s, during the Cold War, it became apparent that there was a need for a bombproof communications system. A concept was devised to link computers together throughout the country. With such a system in place, large sections of the country could be nuked and messages would still get through.

In the beginning, only government “think tanks” and a few universities were linked. Basically the Internet was an emergency military communications system operated by the Department of Defense’s Advanced Research Projects Agency (ARPA). The whole operation was referred to as ARPANET.

In time, ARPANET computers were installed at every university in the United States that had defense-related funding. Gradually, the Internet went from a military pipeline to a communications tool for scientists. As more scholars came online, the administration of the system transferred from ARPA to the National Science Foundation.

Years later, businesses began using the Internet and the administrative responsibilities were once again transferred.

At this time no one party “operates” the Internet; instead, several entities “oversee” the system and the protocols that are involved.

The speed of the Internet has changed the way people receive information. It combines the immediacy of broadcast with the in-depth coverage of newspapers…making it a perfect source for news and weather information.

Internet usage is at an all-time high. Almost 100 million U.S. adults are now going online every month, according to New York-based Mediamark Research. That’s half of American adults and a 27 percent increase over 1999 in the number who surf the Web.

There also appears to be a continuing gender shift in the number of American adults going online. In early 2000, Mediamark reported the milestone that women for the first time ever accounted for half of the online adult population. Now 51 percent of U.S. surfers - some 50.6 million - are women.

There are several ways to access the Internet. Learn about the options that are available to you.

How the Internet Works


For the purpose of this example, let’s say that you want to send a file to a friend who lives on the opposite side of the country. You select the file that your friend wants and send it to him via email. Transmission Control Protocol / Internet Protocol (TCP/IP) prepares the data to be sent and received. TCP/IP ensures that a Macintosh network can exchange data with a Windows or Unix network, and vice versa.

The file that you are sending does not travel to your friend’s computer directly, or even in a single continuous stream. The file you are sending gets broken up into separate data packets. The Internet Protocol side of TCP/IP labels each packet with the unique Internet address, or IP address, of your friend’s computer. Since these packets will travel separate routes, some arriving sooner than others, the Transmission Control Protocol side of TCP/IP assigns a sequence number to each of the packets. These sequence numbers tell the TCP/IP in your friend’s computer how to reassemble the packets once they arrive. Amazingly, the complicated process of TCP/IP takes place in a matter of milliseconds.
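The segmentation-and-reassembly idea above can be sketched in a few lines of Python. This is a toy illustration only: real TCP/IP runs inside the operating system, and the field names ("dest", "seq", "payload") are invented for clarity.

```python
import random

def split_into_packets(data: bytes, size: int, dest_ip: str):
    """Break data into packets, each labeled with a destination
    address (IP's job) and a sequence number (TCP's job)."""
    return [
        {"dest": dest_ip, "seq": i, "payload": data[i * size:(i + 1) * size]}
        for i in range((len(data) + size - 1) // size)
    ]

def reassemble(packets):
    """Packets may arrive in any order; sort by sequence number
    before joining the payloads back together."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

message = b"Hello, friend across the country!"
packets = split_into_packets(message, size=8, dest_ip="203.0.113.7")

# Simulate the packets taking different routes and arriving out of order.
random.shuffle(packets)

assert reassemble(packets) == message
```

The sequence numbers are what make the shuffle harmless: no matter what order the packets arrive in, the receiver can put them back together.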

The packets are then sent from one “router” to the next. Each router reads the IP address of the packet and decides which path will be the fastest. Since the traffic on these paths is constantly changing each packet may be sent a different way.

It is possible to discover the paths between routers using a utility known as Traceroute. Using your favorite search engine, type in “traceroute” to find different Web sites hosting it.

Also, check out the Internet Traffic Report to find out how much global Internet traffic there is at this moment…and where the “bottlenecks” are. The Internet Traffic Report monitors the flow of data around the world and displays a value between zero and 100; higher values indicate faster and more reliable connections. This information may not be useful to you…but it’s interesting!

Internet Access

Regardless of whether you use the Internet for work or for online shopping from home, your choice of an Internet Service Provider is important. Your ISP can mean the difference between a great experience and a frustrating one.

There are nearly 7,000 ISPs in the United States alone. Some are massive telecommunications conglomerates with user populations larger than many nations. Others are mom-and-pop operations that know every customer by first name.

How do you decide whether to use a traditional online service or an Internet service provider? Figuring out which is best for you involves asking the right questions, of both yourself and your provider.

The online services connect you to the Internet, and so do ISPs. The big difference between the two is “content.” The online services provide proprietary content…and lots of it. Most ISPs provide very little original content; you must venture out yourself (onto the Web, Usenet, etc.) and find it.

You will probably discover that an ISP can provide you with service just as good, or better, at the same price or less.

Not all ISPs are created equal. Some are very good, some are very bad. Here are some questions that you should ask of any potential ISP before you sign on the bottom line:

What’s the cost? This may not be the most important factor but it’s a good place to start. Most ISPs charge around $20 a month. If you shop around you may find one for around $10 a month. Broadband cable may cost as much as $50 a month.

Do they offer discounts if you prepay the entire year upfront? (This is a good option, providing that it fits into your budget, if you choose a good ISP. It’s a bad option if the ISP turns out to be less than desirable.)

What connection speeds do they support? Broadband? DSL? Dial-up? A good dial-up ISP will support 56K. You may not have a 56K modem yourself, but this provides some indication of the commitment that the ISP is willing to make.

Do they offer a free trial? Try-before-you-buy is always a good thing.

What’s the ratio of modems to users? 6 to 8 users per modem is quite acceptable. Find out what number you would dial in on…and try it a few times.

Does your call go through, or do you receive busy signals?

How good is the customer support? Some will provide customer support 24 hours a day, 7 days a week…with an “800” number. Most aren’t quite that good. Call their customer support number a few times before you decide to sign up. Take it as a bad sign if you frequently get a busy signal.

Do they charge a “setup” fee? Some do…most don’t. If you live in a city with many ISPs, find one that doesn’t charge you for the privilege of bringing your business to them.

There are a few ways to find Internet Service Providers in your area. We recommend Find An ISP. They list Internet Service Providers by city.

You may also wish to check out The List. They have a large listing of ISPs broken down by area code.

You can also check your local phone book.

Yours truly,
Ferdinand Che.

Internet FAQs and Answers

The following questions are answered in this post.

  • Who invented the Internet?
  • Why was the Internet invented?
  • Who invented the World Wide Web?
  • Why was the World Wide Web invented?
  • When was the World Wide Web invented?
  • Where was the World Wide Web invented?
  • Did the World Wide Web drive the growth of the Internet?
  • What was the first web browser?
  • What was the first web site?
  • What will the Web be like in the future?
Who invented the Internet?

No one person invented the Internet as we know it today. However, several key figures contributed major breakthroughs:

Leonard Kleinrock was the first to publish a paper about the idea of packet switching, which is essential to the Internet. He did so in 1961. Packet switching is the idea that packets of data can be "routed" from one place to another based on address information carried in the data, much like the address on a letter. Packet switching replaces the older concept of "circuit switching," in which an actual electrical circuit is established all the way from the source to the destination. Circuit switching was the idea behind traditional telephone exchanges.

Why Packet Switching Matters

The big advantage of packet switching: a physical connection can carry packets for many different purposes at the same time, depending on how heavy the traffic is. This is much more efficient than tying up a physical connection for the entire duration of a phone call. And for services like the World Wide Web, where traffic comes in bursts, it's essential.

What if Google needed a separate modem and phone line to talk to every user, like an old-fashioned BBS (Bulletin Board System)? Handling millions of users would be prohibitively expensive.

With packet switching, packets destined for thousands or millions of users can share a single physical connection to the Internet.
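That sharing can be sketched as a toy multiplexer in Python. The round-robin interleaving and the user names below are simplifying assumptions; real routers queue and forward packets as traffic actually arrives.

```python
from itertools import zip_longest

def multiplex(streams):
    """Interleave packets from several users onto one shared link."""
    link = []
    for batch in zip_longest(*streams.values()):
        link.extend(p for p in batch if p is not None)
    return link

def demultiplex(link):
    """At the far end, sort packets back out by their address label."""
    out = {}
    for dest, payload in link:
        out.setdefault(dest, []).append(payload)
    return out

# Each packet is (destination, payload); the names are illustrative.
streams = {
    "alice": [("alice", "a1"), ("alice", "a2")],
    "bob":   [("bob", "b1"), ("bob", "b2"), ("bob", "b3")],
}
link = multiplex(streams)  # one connection carries both users' traffic
assert demultiplex(link)["bob"] == ["b1", "b2", "b3"]
```

Because every packet carries its own address label, one link can serve any number of conversations at once; the label, not a dedicated circuit, is what keeps them apart.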

J.C.R. Licklider was the first to describe an Internet-like worldwide network of computers, in 1962. He called it the "Galactic Network."

Larry G. Roberts created the first functioning long-distance computer networks in 1965 and designed the Advanced Research Projects Agency Network (ARPANET), the seed from which the modern Internet grew, in 1966.

Bob Kahn and Vint Cerf invented the Transmission Control Protocol (TCP) which moves data on the modern Internet, in 1972 and 1973.

Radia Perlman invented the spanning tree algorithm in the 1980s. Her spanning tree algorithm allows efficient bridging between separate networks. Without a good bridging solution, large-scale networks like the Internet would be impractical.

By 1983, TCP was the standard and ARPANET began to resemble the modern Internet in many respects. The ARPANET itself was taken out of commission in 1990. Most restrictions on commercial Internet traffic ended in 1991, with the last limitations removed in 1995.

For a much more complete history, see the web site of the Internet Society.

Note that the Internet and the World Wide Web are not the same thing. See also: Who invented the World Wide Web? and What is the difference between the World Wide Web and the Internet? Hobbes' Internet Timeline is another excellent history of the Internet, and includes later important events.

What was the first web site?

The very first web site was nxoc01.cern.ch, and the very first web page was http://nxoc01.cern.ch/hypertext/WWW/TheProject.html. That site shut down a long time ago.

Why was the Internet invented?

The Internet evolved from ARPANET (Advanced Research Projects Agency Network), an effort supported by the United States Department of Defense. The developers of ARPANET wanted to make communication between separate computer systems at various universities and research laboratories more convenient. See also Who invented the Internet? and Wikipedia's ARPANET entry.

Contrary to popular belief, while the Internet was designed to survive the loss of various parts of the network, it was never intended to survive a nuclear war. See the Wikipedia ARPANET entry for more information about this urban legend.

Who invented the World Wide Web?

The World Wide Web was invented by Tim Berners-Lee and Robert Cailliau in 1990. In 1989, while working at CERN (the European Organization for Nuclear Research), both men made proposals for hypertext systems. In 1990 they joined forces and wrote a joint proposal in which the term "World Wide Web" is used for the first time (originally without spaces). And in late 1990 and early 1991, Tim Berners-Lee wrote the first web browser.

Berners-Lee went on to found the World Wide Web Consortium, which seeks to standardize and improve World Wide Web-related things such as the HTML markup language in which web pages are written. Cailliau also made ongoing contributions to the Web. Robert Cailliau's 1995 speech, "A Short History of the Web," is an excellent resource for those who want to understand the history in more detail.

Tim Berners-Lee invented both the HTML markup language and the HTTP protocol used to request and transmit web pages between web servers and web browsers.

Why was the World Wide Web invented?

According to Tim Berners-Lee, he had a big idea in mind when he and Robert Cailliau invented the Web: a "common information space in which we communicate by sharing information."

However, at the time, Berners-Lee and Cailliau had a more immediate goal: to make it easier for nuclear physics researchers to share information.

Both men worked for the CERN physics research facility and wrote independent proposals for a hypertext system to help researchers communicate. And you can still read Tim Berners-Lee's original proposal to his boss at the time, Mike Sendall.

Berners-Lee and Cailliau joined forces and wrote a joint proposal for the "WorldWideWeb" system, justifying it as a single simple interface to all of the information systems used by researchers at CERN and elsewhere.

When was the World Wide Web invented?

Tim Berners-Lee and Robert Cailliau's official proposal for the World Wide Web is dated November 12th, 1990. This is the first document that actually uses the term.

In 1989, Berners-Lee and Cailliau had separately presented ideas for a hypertext system for their employer, the CERN nuclear physics research facility in Switzerland.

In early 1991 Berners-Lee wrote the first web browser.

Where was the World Wide Web invented?

The World Wide Web was invented at CERN (the European Organization for Nuclear Research), in Switzerland.

Did the World Wide Web drive the growth of the Internet?

Yes. Email was already a popular application making inroads into the mainstream before the arrival of the World Wide Web, and Gopher servers were already beginning to provide a user-friendly means of sharing information. The introduction of web browsers and HTML, however, made Internet publishing accessible to a mass audience and greatly increased demand for Internet access.

The open and free nature of the standards on which the Web is based made it possible for content providers to publish without paying license fees to any one central organization such as America Online, Compuserve or Microsoft. The nonproprietary nature of the Web drove its acceptance by those on the supply side of the equation, in turn generating new demand as new groups of users discovered web sites of interest to them.

What was the first web browser?


Tim Berners-Lee, who invented the World Wide Web together with Robert Cailliau, built the first working prototype in late 1990 and early 1991. That first prototype consisted of a web browser for the NeXTStep operating system. This first web browser, which was named "WorldWideWeb," had a graphical user interface and would be recognizable to most people today as a web browser. However, WorldWideWeb did not support graphics embedded in pages when it was first released.

You can learn more about the original "WorldWideWeb" browser from Tim Berners-Lee himself.

Since WorldWideWeb had a graphical user interface (GUI), it could be called a graphical web browser. However, it did not display web pages with graphics embedded in them. That did not happen until the arrival of NCSA Mosaic 2.0.

The first graphical web browser to become truly popular and capture the imagination of the public was NCSA Mosaic. Developed by Marc Andreessen, Eric Bina and others who later went on to create the Netscape browser, NCSA Mosaic was the first to be available for Microsoft Windows, the Macintosh, and the Unix X Window System, which made it possible to bring the web to the average user. The first version appeared in March 1993. The "inline images" that are an integral part of almost every web page today were introduced by NCSA Mosaic 2.0, in January of 1994. Mosaic 2.0 also introduced forms.

Netscape is the browser that introduced most of the remaining major features that define a web browser as we know it. The first version of Netscape appeared in October 1994 under the code name "Mozilla." Netscape 1.0's early beta versions introduced the "progressive rendering" of pages and images, meaning that the page begins to appear and the text can be read even before all of the text and/or images have been completely downloaded. Version 1.1, in March 1995, introduced HTML tables, which are now used in the vast majority of web pages to provide page layout. Version 2.0, in October 1995, introduced frames, Java applets, and JavaScript. Version 2.0 was the last version of Netscape to introduce a major feature of the web as we know it today; later versions improved reliability and stability and introduced features that did not catch on as standards for all browsers. In 1998, Netscape decided to release their browser source code as open source software, and the Mozilla project began.

Microsoft Internet Explorer is by far the most common web browser in use as of this writing. Internet Explorer 1.0, released in August 1995, broke no important new ground in a way that became part of a future standard. Later versions of Internet Explorer quickly caught up; Internet Explorer 3.0 was very close to Netscape 2.0's feature set. In July 1996, Internet Explorer 3.0 beta introduced the first useful implementation of cascading style sheets, which allow better control of the exact appearance of web pages. In April 1997, Internet Explorer 4.0 introduced the first quality implementation of the Document Object Model (DOM), which allows JavaScript to modify the appearance and content of a web page after it has been loaded.

What will the Web be like in the future?

If I knew for sure, I'd be out there building it! However, here's a sampling of what I see coming up, in no particular order:

1. Better interactive applications. Web-based applications will get faster, friendlier, and more visually impressive, becoming able to do things we normally associate with software that comes on a CD. Gmail and Google Maps are good examples of how AJAX programming makes web sites more interactive, without forcing the user to wait every time they click a button.

2. Better vector graphics. Although Flash is extremely well-established, Microsoft's Sparkle will challenge Adobe/Macromedia's dominance with superior 3D effects for web pages. However, Sparkle works only with Windows Vista, and Flash works everywhere: Mac, Linux, and old and new Windows computers. SVG, an open standard supported by the W3C industry organization, is also a player here but acceptance of SVG as a Flash alternative has been slow. That may be partly due to its sheer complexity - it's true that Internet Explorer doesn't support it, but even Firefox is still "a long way away" from full SVG support.

In response to the complexity of SVG, the latest versions of both Apple's Safari and the Mozilla Foundation's Firefox support Canvas, a simple way of adding 2D graphics support to JavaScript-enabled web pages. Even though Internet Explorer doesn't support it, the inviting simplicity of Canvas may make it popular with web developers - and if Canvas-only web pages become common, that will drive users to Firefox and Safari... leading Microsoft to do the sensible thing and add Canvas support to Internet Explorer.

3. Open standards for cross-platform video. Unfortunately, right now, Adobe's Flash video format is the only high-quality, low-bandwidth video format that works well across most browsers and operating systems. Since the tools to create Flash video aren't free, there's an opportunity for an open-source solution of similar quality to break in... if users can be convinced to install the player software. Theora is a possible candidate here.

4. Open standards for cross-platform audio. While MP3 is a mostly adequate audio format, it's not really free: Fraunhofer AG charges license fees for the use of MP3-creation software. Ogg Vorbis is a truly open alternative, and some feel it offers superior quality. Again, the big catch is convincing users to install it.

5. Open standards for audio and video control. There are many different players for audio and video, leading to a tangle of different scripting approaches that make it almost impossible for a web designer to offer anything but "play," "pause" and "stop" buttons. Everything else is proprietary or not available to JavaScript at all. Right now, the only way to design an embedded audio player that fits harmoniously into your page design is to design your player in Flash - another closed standard. The time is ripe for a standard set of JavaScript methods, or "verbs," that interact with embedded audio and video players. To "play," "pause" and "stop," we must add "getcurrenttime," "gettotallength," and "setcurrenttime" at a bare minimum. Until that happens, web designers will continue to desert JavaScript in favor of designing media-rich pages in Flash.

6. The "semantic web." Many hope that XML will lead to a Web where web sites can describe their own contents in a way that other programs - not just people - can understand. This leads to useful tools that combine information from many sites. For example....

7. Web service "mash-ups." Many major web sites, such as Amazon and Google, now provide ways to fetch data and use it as part of another site. Amazon, for instance, lets you fetch information about books and use it as part of your own dynamic site design, presumably because it all leads to improved sales. And Google allows both web searches and map displays to be integrated into your own site - under certain terms and conditions. These features are leading to intriguing new applications of the web.

8. XML: important, but not everything. XML is a full-service, overwhelmingly complete way to describe things. But despite the "X" in AJAX, many AJAX applications don't actually rely on XML, because simpler ways of formatting data sent between web browsers and web servers work just fine for many applications. XML will shine primarily as a way of standardizing information that one web site can request from another.

9. Blogging and RSS. Virtually all sites will offer the ability to subscribe to an RSS feed of what's new and interesting on the site. Reading a collected "newspaper" of what's new on your favorite feeds will replace manually visiting web sites every morning... and for many people, it already has.

10. High-quality free content, supported by advertising. Google Adsense and Kontera have made it possible to derive a profit from almost any popular web site - as long as the web site's audience is reading about something that might have a connection to a legitimate product or service.

11. Great stuff from the WHATWG. WHATWG (the Web Hypertext Application Technology working group) is finalizing proposals to improve all web browsers in many ways. Their proposals include Web Forms 2.0, which enhances support for data entry in web pages, Web Applications 1.0, which covers more advanced features such as rich text editing and Canvas 2D graphics, and Web Controls 1.0, which will make it easier to create custom controls in web pages, such as calendars, color selectors, and so on. While Opera and Mozilla/Firefox appear to be the most active participants, the WHATWG had the wisdom to adopt the Canvas feature from Apple's Safari browser as part of Web Applications 1.0, and it is hoped that Microsoft will also participate.
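A small illustration of point 8 above: many AJAX applications exchange data as JSON rather than XML, because it is terser and parses with a single standard-library call. A sketch in Python; the record and the hand-written XML below are invented for comparison.

```python
import json

record = {"title": "A Short History of the Web", "year": 1995}

# One call serializes the record to JSON...
as_json = json.dumps(record)

# ...versus hand-writing the equivalent XML for the same data.
as_xml = ("<record><title>A Short History of the Web</title>"
          "<year>1995</year></record>")

assert json.loads(as_json)["year"] == 1995  # parsing is one call too
assert len(as_json) < len(as_xml)           # and the payload is smaller
```

For simple records like this, the simpler format wins; XML earns its keep when documents need namespaces, attributes, or validation against a schema.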

Yours truly,
Ferdinand Che.

History of the Internet

Many people believe that the Internet was introduced only in the last decade. No. Its origins go back to the late 1950s, though the word “internet” was not yet in use during that period.

Below is a Timeline History of the Internet

1957
The USSR launches Sputnik, the first artificial earth satellite. In response, the United States forms the Advanced Research Projects Agency (ARPA) within the Department of Defense (DoD) to establish US lead in science and technology applicable to the military.
Backbones: None - Hosts: None

1962
Paul Baran, of the RAND Corporation (a nonprofit research institute), was commissioned by the U.S. Air Force to do a study on how it could maintain its command and control over its missiles and bombers after a nuclear attack. This was to be a military research network that could survive a nuclear strike, decentralized so that if any locations (cities) in the U.S. were attacked, the military could still have control of nuclear arms for a counter-attack.

Baran's finished document described several ways to accomplish this. His final proposal was a packet switched network.

"Packet switching is the breaking down of data into datagrams or packets that are labeled to indicate the origin and the destination of the information and the forwarding of these packets from one computer to another computer until the information arrives at its final destination computer. This was crucial to the realization of a computer network. If packets are lost at any given point, the message can be resent by the originator."
Backbones: None - Hosts: None

1968
ARPA awarded the ARPANET contract to BBN. BBN had selected a Honeywell minicomputer as the base on which they would build the switch. The physical network was constructed in 1969, linking four nodes: the University of California at Los Angeles, the Stanford Research Institute (SRI), the University of California at Santa Barbara, and the University of Utah. The network was wired together via 50 Kbps circuits.
Backbones: 50Kbps ARPANET - Hosts: 4

1972
The first e-mail program was created by Ray Tomlinson of BBN.

The Advanced Research Projects Agency (ARPA) was renamed The Defense Advanced Research Projects Agency (or DARPA)

At this time, ARPANET was using the Network Control Protocol, or NCP, to transfer data. NCP allowed communications between hosts running on the same network.
Backbones: 50Kbps ARPANET - Hosts: 23

1973
Development began on the protocol later to be called TCP/IP. It was developed by a group headed by Vinton Cerf from Stanford and Bob Kahn from DARPA. This new protocol would allow diverse computer networks to interconnect and communicate with each other.
Backbones: 50Kbps ARPANET - Hosts: 23+

1974
First use of the term “Internet,” by Vint Cerf and Bob Kahn in a paper on the Transmission Control Protocol.
Backbones: 50Kbps ARPANET - Hosts: 23+

1976
Dr. Robert M. Metcalfe develops Ethernet, which allowed coaxial cable to move data extremely fast. This was a crucial component to the development of LANs.

The packet satellite project went into practical use. SATNET, the Atlantic packet satellite network, was born. This network linked the United States with Europe. Surprisingly, it used INTELSAT satellites that were owned by a consortium of countries and not exclusively the United States government.

UUCP (Unix-to-Unix CoPy) developed at AT&T Bell Labs and distributed with UNIX one year later.

The Department of Defense began to experiment with the TCP/IP protocol and soon decided to require it for use on ARPANET.
Backbones: 50Kbps ARPANET, plus satellite and radio connections - Hosts: 111+

1979
USENET (the decentralized news group network) was created by Steve Bellovin, a graduate student at University of North Carolina, and programmers Tom Truscott and Jim Ellis. It was based on UUCP.

The creation of BITNET (“Because It’s Time Network”) by IBM introduced the “store and forward” network. It was used for email and listservs.
Backbones: 50Kbps ARPANET, plus satellite and radio connections - Hosts: 111+

1981
The National Science Foundation created a 56 Kbps backbone network called CSNET for institutions without access to ARPANET. Vinton Cerf proposed a plan for an inter-network connection between CSNET and the ARPANET.
Backbones: 50Kbps ARPANET, 56Kbps CSNET, plus satellite and radio connections - Hosts: 213

1983
Internet Activities Board (IAB) was created in 1983.

On January 1st, every machine connected to ARPANET had to use TCP/IP. TCP/IP became the core Internet protocol and replaced NCP entirely.

The Domain Name System (DNS) was introduced, with the University of Wisconsin creating the first name server. DNS allowed packets to be directed to a domain name, which would be translated by the server database into the corresponding IP number. This made it much easier for people to access other servers, because they no longer had to remember numbers.
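The translation DNS performs can be sketched with a toy lookup table. The names and addresses below are illustrative, not live DNS records, and real resolution queries a distributed hierarchy of name servers rather than one local dictionary.

```python
# Illustrative name-to-address table (not live DNS data).
HOSTS = {
    "example.com": "93.184.216.34",
    "cern.ch":     "188.184.21.108",
}

def resolve(name: str) -> str:
    """Return the IP address on record for a domain name."""
    try:
        return HOSTS[name]
    except KeyError:
        # Real DNS signals this as NXDOMAIN ("no such domain").
        raise LookupError(f"no record for {name}")

assert resolve("example.com") == "93.184.216.34"
```

For a real lookup, Python's standard library exposes the system resolver via `socket.gethostbyname("example.com")`, which requires network access.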

1984
The ARPANET was divided into two networks: MILNET and ARPANET. MILNET was to serve the needs of the military, and ARPANET to support the advanced research component; the Department of Defense continued to support both networks.

The upgrade to CSNET was contracted to MCI. The new circuits would be T1 lines (1.544 Mbps), roughly 27 times faster than the old 56 Kbps lines. IBM would provide advanced routers and Merit would manage the network. The new network was to be called NSFNET (National Science Foundation Network), and the old lines were to keep the name CSNET.
Backbones: 50Kbps ARPANET, 56Kbps CSNET, plus satellite and radio connections - Hosts: 1024

1985
The National Science Foundation began deploying its new T1 lines, which would be finished by 1988.
Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 1961

1986
The Internet Engineering Task Force or IETF was created to serve as a forum for technical coordination by contractors for DARPA working on ARPANET, US Defense Data Network (DDN), and the Internet core gateway system.
Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 2308

1987
BITNET and CSNET merged to form the Corporation for Research and Educational Networking (CREN), another work of the National Science Foundation.
Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 28,174

1988
Soon after the completion of the T1 NSFNET backbone, traffic increased so quickly that plans immediately began on upgrading the network again.
Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 56,000

1990
(Updated 8/2001) Merit, IBM and MCI formed a not-for-profit corporation called ANS, Advanced Network & Services, which was to conduct research into high speed networking. It soon came up with the concept of the T3, a 45 Mbps line. NSF quickly adopted the new network and by the end of 1991 all of its sites were connected by this new backbone.

While the T3 lines were being constructed, the Department of Defense disbanded the ARPANET, and it was replaced by the NSFNET backbone. The original 50Kbps lines of ARPANET were taken out of service.

Tim Berners-Lee at CERN in Geneva implements a hypertext system to provide efficient information access to the members of the international high-energy physics community.
Backbones: 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 313,000

1991
CSNET (which consisted of 56Kbps lines) was discontinued, having fulfilled its important early role in the provision of academic networking service. A key feature of CREN is that its operational costs are fully met through dues paid by its member organizations.

The NSF established a new network, named NREN, the National Research and Education Network. The purpose of this network was to conduct high speed networking research. It was not to be used as a commercial network, nor was it to be used to send a lot of the data that the Internet now transfers.
Backbones: Partial 45Mbps (T3) NSFNET, a few private backbones, plus satellite and radio connections - Hosts: 617,000

1992
Internet Society is chartered.

World-Wide Web released by CERN.

NSFNET backbone upgraded to T3 (44.736Mbps)
Backbones: 45Mbps (T3) NSFNET, private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, plus satellite and radio connections - Hosts: 1,136,000

1993
InterNIC created by NSF to provide specific Internet services: directory and database services (by AT&T), registration services (by Network Solutions Inc.), and information services (by General Atomics/CERFnet).

Marc Andreessen, at NCSA at the University of Illinois, developed a graphical user interface to the WWW, called "Mosaic for X".
Backbones: 45Mbps (T3) NSFNET, private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, and 45Mbps lines, plus satellite and radio connections - Hosts: 2,056,000

1994
No major changes were made to the physical network. The most significant development was growth: many new networks were added to the NSF backbone, and hundreds of thousands of new hosts were added to the Internet during this time period.

Pizza Hut offers pizza ordering on its Web page.

First Virtual, the first cyberbank, opens.

ATM (Asynchronous Transfer Mode, 145Mbps) backbone is installed on NSFNET.
Backbones: 145Mbps (ATM) NSFNET, private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, and 45Mbps lines, plus satellite and radio connections - Hosts: 3,864,000

1995
The National Science Foundation announced that as of April 30, 1995 it would no longer allow direct access to the NSF backbone. It contracted with four companies to be providers of access to the backbone (Merit); these companies would then sell connections to groups, organizations, and companies.

$50 annual fee is imposed on domains, excluding .edu and .gov domains which are still funded by the National Science Foundation.

1996
Most Internet traffic is carried by backbones of independent ISPs, including MCI, AT&T, Sprint, UUNET, BBN Planet, ANS, and more.

Currently the Internet Society, which oversees Internet standards, is working on a new version of TCP/IP (IPv6) that will allow billions of addresses, rather than the limited system of today. The problem that has arisen is that it is not known how both the old and the new addressing systems will be able to work at the same time during a transition period.

Yours truly,
Ferdinand Che.

Internet vs WWW

DIFFERENCES BETWEEN THE WORLD WIDE WEB AND THE INTERNET

All of the web sites in the world, taken together, make up the World Wide Web. The Internet is the worldwide network of interconnected computers, including both web servers and computers like the one on your desk that run web browser software. The Internet also carries other kinds of network traffic unrelated to the web.

Let's put it even more simply:

The Internet is the actual network. The World Wide Web is something you can do with it. You can do other things with it, too. Playing Quake or sending email both use the Internet but are not the World Wide Web.

Internet Terminology

Several terms are closely associated with the Internet. To understand the Internet better, it is essential to have a definition of these basic terms.

WHAT IS THE INTERNET?

"The Internet" refers to the worldwide network of interconnected computers, all of which use a common protocol known as TCP/IP to communicate with each other. Every publicly accessible web site is hosted by a web server computer, which is a part of the Internet. Every personal computer, cell phone or other device that people use to look at web sites is also a part of the Internet. The Internet also makes possible email, games and other applications unrelated to the World Wide Web.

WHAT IS THE WORLD WIDE WEB?


The term "World Wide Web" refers to all of the publicly accessible web sites in the world, in addition to other information sources that web browsers can access. These other sources include FTP sites, USENET newsgroups, and a few surviving Gopher sites. WWW is an acronym, which stands for World Wide Web.

WHAT IS A WEBPAGE?


Every web site is made up of one or more web pages -- like the one you are looking at right now! This text is part of a web page, and is written in the HyperText Markup Language (HTML). In addition to text with hyperlinks, tables, and other formatting, web pages can also contain images. Less commonly, web pages may contain Flash animations, Java applets, or MPEG video files.
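The markup behind a page like this one can be sketched in miniature. The snippet below is a hypothetical page (not a real site), and the Python code uses only the standard library to list the tags it is built from:

```python
from html.parser import HTMLParser

# A hypothetical, minimal web page: text, a hyperlink, and an image.
page = """<html>
<head><title>My First Page</title></head>
<body>
<h1>Hello, web!</h1>
<p>Visit <a href="http://www.example.com/">this site</a>.</p>
<img src="photo.jpg" alt="A photo">
</body>
</html>"""

# Walk the page and collect every opening tag, in document order.
class TagLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

lister = TagLister()
lister.feed(page)
print(lister.tags)
```

Everything the browser displays -- the heading, the link, the image -- corresponds to one of these tags in the HTML source.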

WHAT IS A HOME PAGE?


The "home page" of a web site is the page that is displayed if you simply type in the domain name of the site in the address bar of your browser and press enter. For instance, when you type in www.cnn.com and press enter in the address bar, you go to CNN's home page. "Home page" can also refer to a page that serves as the table of contents and logical starting point for any collection of web pages, such as the personal web pages of an individual, even if it is not actually the top-level home page for the domain name. Also sometimes referred to as a "homepage."

WHAT IS A URL?

Just imagine you open a website and on the address bar, you find something like http://theinternetismine.blogspot.com/…

This is the Uniform Resource Locator (URL) of the web page you are looking at right now. A URL can be thought of as the "address" of a web page and is sometimes referred to informally as a "web address."

URLs are used to write hyperlinks that take you from one page to another.

A URL is made up of several parts. The first part is the protocol, which tells the web browser what sort of server it will be talking to in order to fetch the URL. In this example, the protocol is http.

The remaining parts vary depending on the protocol, but the vast majority of URLs you will encounter use the http protocol; exceptions include file URLs, which link to local files on your own hard drive, ftp URLs, which work just like http URLs but link to things on FTP servers rather than web servers, and mailto URLs, which can be used to invite a user to write an email message to a particular email address.

The second part of the example URL above is the fully qualified domain name of the web site to connect to; in this case it is theinternetismine.blogspot.com. This name identifies the web site containing the page. The term "fully qualified domain name" refers to a complete web site's or other computer's name on the Internet. The term "domain name" usually refers only to the last part of the name -- in this case blogspot.com -- which has been registered for a particular organization's exclusive use.

The third part of a URL is the path at which the particular web page is located on the web server. In a URL such as http://www.example.com/newfaq/basic/url.html, the path is /newfaq/basic/url.html. Similar to a filename, a path usually indicates where the web page is located within the web space of the site; in this example it is in the basic sub-folder of the newfaq folder, which is located in the top-level web page directory of the site.
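The parts described above can be pulled apart with Python's standard library. The URL here is a hypothetical example, not a real page:

```python
from urllib.parse import urlsplit

# Split a URL into the parts described above.
url = "http://www.example.com/newfaq/basic/url.html"
parts = urlsplit(url)

print(parts.scheme)  # the protocol
print(parts.netloc)  # the fully qualified domain name
print(parts.path)    # the path on the web server
```

Running this prints `http`, `www.example.com`, and `/newfaq/basic/url.html` -- the protocol, domain name, and path in turn.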

WHAT IS A WEB BROWSER?

When you sit down and look at web pages, you are using a web browser. This is the piece of software that communicates with web servers for you via the HTTP protocol, translates HTML pages and image data into a nicely formatted on-screen display, and presents this information to your eyeballs -- or to your other senses, in the case of browsers for the vision-impaired and other alternative interface technologies. Web browsers also appear in simpler devices such as Internet-connected cell phones, like many Nokia models, and PDAs such as the Palm Pilot.
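Under the hood, the browser's conversation with a web server starts with a short piece of text. This sketch builds the kind of HTTP GET request a browser sends; the host and path are placeholders, not a real site:

```python
# The text a browser sends to ask a web server for a page over HTTP.
host = "www.example.com"
path = "/index.html"
request = (
    f"GET {path} HTTP/1.1\r\n"   # method, path, and protocol version
    f"Host: {host}\r\n"          # which web site on this server we want
    "Connection: close\r\n"      # close the connection after the reply
    "\r\n"                       # a blank line ends the request headers
)
print(request)
```

The server replies with the page's HTML, which the browser then formats for your screen.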

The most common web browser, by a large margin, is Microsoft Internet Explorer, followed by the open-source Mozilla browser and its derivatives, including Netscape 6.0 and later. Apple's new Safari browser is gaining popularity on Macintoshes running MacOS X, and the Opera shareware browser has a loyal following among those who are willing to pay for the fastest browser possible, especially on older computers. The Lynx browser is the most frequently used text-only browser and has been adapted to serve the needs of the vision-impaired.

WHAT IS A WEB SERVER?

Web servers are the computers that actually run web sites. The term "web server" also refers to the piece of software that runs on those computers, accepting HTTP connections from web browsers and delivering web pages and other files to them, as well as processing form submissions. The most common web server software is Apache, followed by Microsoft Internet Information Server (IIS); many, many other web server programs also exist.
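A working web server can be tiny. This Python standard-library sketch (the page content and address are made up for the demo) answers every GET request with a one-line HTML page, then fetches that page back like a miniature browser:

```python
from http.server import HTTPServer, BaseHTTPRequestHandler
import threading
import urllib.request

# Answer every GET request with a small HTML page.
class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        page = b"<html><body>Hello from a tiny web server</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(page)))
        self.end_headers()
        self.wfile.write(page)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Port 0 asks the operating system to pick any free port.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as a very small "browser": request the page back over HTTP.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    body = resp.read().decode()
print(body)

server.shutdown()
```

Real servers like Apache do vastly more -- virtual hosts, logging, CGI, security -- but the request/response cycle is the same.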

WHAT IS A HYPERLINK?

Every time you click on a link on a web page, such as the link you may have clicked on to reach this page, you are following a hyperlink.

A hyperlink is a link you can click on or activate with the keyboard or other device in order to go somewhere else. A hyperlink is defined by its function, not by its appearance. What it looks or sounds or smells like is completely irrelevant except as a way of recognizing it. Visually impaired people follow hyperlinks with speech-based browsers and never see text at all. A hyperlink without a blue underline is still a hyperlink if your browser allows you to click on it or otherwise activate it to go somewhere else on the World Wide Web, or in another hypertext system.
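In HTML, a hyperlink is an `<a>` tag with an `href` attribute, whatever it happens to look like on screen. This sketch (the snippet is hypothetical, the code is Python standard library) collects every link target from a piece of hypertext:

```python
from html.parser import HTMLParser

# Collect the destination of every hyperlink (<a href="...">) in a page.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

snippet = ('<p>Read the <a href="http://www.example.com/faq.html">FAQ</a> '
           'or go <a href="/home.html">home</a>.</p>')
collector = LinkCollector()
collector.feed(snippet)
print(collector.links)
```

Note that the link text ("FAQ", "home") is irrelevant to the collector -- only the `href` destination defines where the hyperlink goes.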

WHAT IS HYPERTEXT?


Hypertext is text that contains hyperlinks. The HTML and XHTML documents we see on the World Wide Web are the best-known example of a hypertext system, but the web is not the only one. Hypertext doesn't necessarily have to include links to documents in other places; a simple hypertext system can live on a single computer, as in the case of Apple's once-common HyperCard application.

WHAT IS A DOMAIN NAME?


The term "domain name" usually refers to a particular organization's registered name on the Internet, such as example.com, boutell.com or udel.edu. There may be many distinct computers within a single domain, or there may be only one. The term "fully qualified domain name" refers to a complete web site or other computer's name on the Internet, such as www.boutell.com or ip2039.cleveland.myisp.com. The holder of a domain name may delegate almost any number of names within that domain, such as www1.example.com, www2.example.com, whimsical.example.com, and so on.

Registered domain names are themselves part of a "top-level domain." Examples of top level domains are .com, .edu, .mx, .fr and so on.
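Splitting a fully qualified domain name into its labels makes the distinction concrete. This rough sketch treats the last label as the top-level domain and the last two labels as the registered domain name -- a simplification, since the real rules depend on the public-suffix list (for example, .co.uk domains register three labels):

```python
# Break a fully qualified domain name into its dot-separated labels.
fqdn = "www.boutell.com"
labels = fqdn.split(".")

top_level = labels[-1]          # the top-level domain
domain = ".".join(labels[-2:])  # the registered domain name (roughly)

print(top_level)  # com
print(domain)     # boutell.com
```

Everything to the left of the registered domain (`www` here) is a name the domain holder chose for one of its own computers.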

WHAT IS A SEARCH ENGINE?

Since no one is in charge of the Web as a whole, there is a business opportunity for anyone to create an index of its contents and an interface for searching that index. Such interfaces are known as search engines. Typically the user types in a few words that relate to what he or she is looking for and clicks a search button, at which point the search engine presents links to web pages which are, hopefully, relevant to that search.

While some early indexes of the web were created by hand, modern search engines rely on automated exploring, or "spidering," of the web by specialized web browsing programs.
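Spidering and indexing can be sketched without touching the network. Below, a made-up three-page "web" is crawled breadth-first, building the kind of word-to-pages index a search engine consults when you type a query:

```python
from collections import deque

# A made-up, in-memory "web": page name -> (page text, linked pages).
pages = {
    "home":  ("welcome to the demo site", ["news", "about"]),
    "news":  ("internet history news", ["home"]),
    "about": ("more news about this demo", ["home", "news"]),
}

index = {}               # word -> set of pages containing it
seen = set()             # pages already spidered
queue = deque(["home"])  # start crawling from the home page

while queue:
    page = queue.popleft()
    if page in seen:
        continue  # don't index the same page twice
    seen.add(page)
    text, links = pages[page]
    for word in text.split():
        index.setdefault(word, set()).add(page)
    queue.extend(links)  # follow hyperlinks to new pages

# A "search": which pages contain the word "news"?
print(sorted(index["news"]))
```

A real spider fetches pages over HTTP and extracts links from HTML, and a real index handles ranking, stemming and billions of documents, but the crawl-then-index structure is the same.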

Yours truly,
Ferdinand Che.