INTERNET AND ITS HISTORY

INTRODUCTION

The globe spans vast distances; even with sophisticated, high-tech aircraft, people normally spend hours or days travelling from one country to another when the two are located far apart. Imagine living without any technology that could carry a message from one location to another within a very short period of time, the way the Internet does.

With the invention of the Internet, we can carry out most of our tasks from the comfort of our own homes, such as watching the news, shopping online, or booking an appointment with a doctor. The Internet enables us to communicate and share resources around the world, wherever we find ourselves, as long as there is network coverage in that location. The Internet also makes our lives much easier and simpler!


Internet Logo


Introduction And Definition of Internet

The Internet is a vast collection of computer networks that together form and act as a single huge network, transporting data and messages across any distance, from within the same office to anywhere around the globe.

The Internet is a global network of billions of computers and other electronic devices. With the Internet, it's possible to access almost any information, communicate with anyone else in the world, and do much more. The evolution of the Internet has changed the course of history through the ease and speed with which information can be shared globally.

On 24 October 1995, the Federal Networking Council (FNC) defined the Internet as the global information system that:

  • is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons
  • is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols, and
  • provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.

How Does the Internet Work?

At this point you may be wondering: how does the Internet work? The exact answer is pretty complicated and would take a while to explain. Instead, let's look at some of the most important things you should know. It's important to realize that the Internet is a global network of physical cables, which can include copper telephone wires, TV cables, and fiber-optic cables. Even wireless connections like Wi-Fi and 4G/5G ultimately rely on these physical cables to access the Internet.

When you visit a website, your device sends a request over these wires to a server. A server is a computer where websites and related information are stored. Once the request arrives, the server retrieves the website and sends the correct data back to your computer, which acts as the client.

The web browser on the client computer then takes the result of the request, which may comprise text, images, audio, or video, and displays it on the screen. What's amazing is that all this happens within a fraction of a second! This is only a brief explanation of how the Internet works; several other services run on the Internet apart from the web, but all of them follow a similar pattern to send and receive information.
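The request/response cycle described above can be sketched in Python using only the standard library. This is a minimal illustration, not a real website: the server, the port choice, and the page content are all invented, and a real browser would perform the same steps against a remote machine instead of the local loopback address.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy "website": the server stores a page and returns it on request.
class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello from the server!</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port, so the sketch runs anywhere.
server = HTTPServer(("127.0.0.1", 0), DemoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client computer" sends a request and receives the page back.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as response:
    page = response.read().decode()

print(page)  # a browser would now render this HTML on screen
server.shutdown()
```

The round trip that takes a fraction of a second across the real Internet is the same exchange shown here on a single machine.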


World Wide Web (WWW) – WEB

The World Wide Web - commonly referred to as WWW, W3, or the Web - is an interconnected system of public webpages accessible through the Internet. The Internet and the World Wide Web are two different terms with different meanings: the Internet is a global network of networks, while the Web is one of several services built on top of the Internet, governed and implemented by the HTTP protocol.

The Web today consists of several components; the most basic include:

  • HTTP:
    The protocol that governs data transfer between a server and a client. It lays the foundation for how data is transported from the Web server and how that data is presented or displayed in the user's browser.
  • URL:
    To access a Web resource, a client supplies a unique identifier, called a URL (Uniform Resource Locator) or URI (Uniform Resource Identifier), formerly called a Universal Document Identifier (UDI).
    Think of a URL as the address that uniquely identifies your apartment in your building, your building on your street, your street in your city, your city in your state/province, and your state/province in your country, with each portion of the URL corresponding to a different part of the address, each giving different information.
  • Hyperlink:
    Also called a link, this is the mechanism for connecting resources on the Web. It's the defining concept of the Web, giving the Web its identity as a collection of connected documents.
    Links don't always go to another website. In some cases, they trigger the download of a file: when you click such a link, the file is downloaded to your device. As you can see, links are a very important part of using the Web. They allow us to navigate between different webpages, download files, and do a whole lot more.
  • HTML:
    It stands for HyperText Markup Language, the language used to design webpages and websites. It's the most common format for publishing web documents on the Internet.

    Web Logo


  • The largest share of the data and information on the Internet today is served through web services. However, some of the data on the web is not publicly accessible; such content makes up what is known as the Deep Web.
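To illustrate the URL component described above, here is a small Python sketch that splits an address into its parts, just as the apartment/street/city analogy suggests. The URL itself is invented for illustration:

```python
from urllib.parse import urlparse

# A hypothetical URL, broken into the "address parts" described above.
url = "https://www.example.com:443/articles/internet-history?page=2#arpanet"
parts = urlparse(url)

print(parts.scheme)    # which protocol to speak: "https"
print(parts.hostname)  # which server to contact: "www.example.com"
print(parts.port)      # which port on that server: 443
print(parts.path)      # which resource on that server
print(parts.query)     # extra parameters for the server: "page=2"
print(parts.fragment)  # a location inside the page: "arpanet"
```

Each portion narrows down the location of the resource, the same way each line of a postal address narrows down a destination.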


    HISTORY OF THE INTERNET

    The Internet was the result of extensive research, experimentation, and dedication, combined with the consistent hard work of engineers and researchers from different fields across the globe. These engineers sacrificed a great deal of resources to see the Internet come to fruition. As a result of their tireless efforts, we can now carry out many of our tasks on the Internet. Thanks to these great engineers and researchers, whose inspiring work makes our lives easier today!


    Internet Logo


    A complete history of the Internet would have to cover four distinct areas: the technological evolution of the Internet, its infrastructural aspects, its socialization aspects, and its commercialization. This article, however, is brief and focuses only on the technological evolution, which also covers the World Wide Web (WWW), electronic mail (email), and the first graphical web browser: the different technologies used from the early stages of the Internet, and how those technologies have been transformed.


    ARPA is the research agency that gave birth to the Internet. But even before the formation of the ARPANET, some agencies had been established solely for the purpose of creating technologies that would enable communication among devices at different locations and the sharing of resources among those devices.


    Origin Of the Internet - ARPANET

    ARPA stands for Advanced Research Projects Agency. It was a Defense Department research project in computer science, an agency through which scientists and researchers could share information, findings, and knowledge, and communicate. It also allowed and helped the field of computer science to develop and evolve. It was there that the vision of Joseph Carl Robnett "Lick" Licklider, the first director of ARPA's Information Processing Techniques Office (IPTO), would start to form in the years to come.

    The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his “Galactic Network” concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today.


    ARPANET


    Licklider also realized that interactive computers could provide more than a library function and could provide great value as automated assistants. He captured his ideas in a seminal paper in 1960 called Man-Computer Symbiosis, in which he described a computer assistant that could answer questions, perform simulation modeling, graphically display results, and extrapolate solutions for new situations from past experience. Licklider foresaw a close symbiotic relationship between computer and human, including sophisticated computerized interfaces with the brain.

    Licklider developed the idea of a universal network, spread his vision throughout the IPTO, and inspired his successors to realize his dream through the creation of the ARPANET.

    Although Licklider left ARPA a few years before the ARPANET was created, his ideas and his vision laid the foundation and building blocks of the Internet. Lawrence Roberts led the team that oversaw the design of the ARPANET system.

    In 1966, ARPA head Charles Herzfeld promised IPTO Director Bob Taylor a million dollars to build a distributed communications network if he could get it organized. Taylor was greatly impressed by Lawrence Roberts's work and asked him to come on board to lead the effort. Roberts resisted at first, then joined as ARPA IPTO Chief Scientist in December 1966 when Taylor brought pressure on him through Herzfeld and Roberts's boss at Lincoln Laboratory. Roberts immediately started working on the system design for a wide-area digital communications network that would come to be called the ARPANET.

    In April 1967, Roberts held an "ARPANET Design Session" at the IPTO Principal Investigator meeting in Ann Arbor, Michigan. The standards for identification and authentication of users, transmission of characters, and error checking and retransmission procedures were outlined at this meeting, and it was at this meeting that Wesley Clark suggested using a separate minicomputer called the Interface Message Processor to serve as interface to the network.


    First Connected Device(s) on The Internet

    Leonard Kleinrock, one of the pioneers of digital network communications, developed an early packet-switching theory, and his focus on analysis, design, and measurement led to his Network Measurement Center (NMC) at the University of California, Los Angeles (UCLA) being selected as the first node (device) on the ARPANET. On a historic day in September 1969, a team at Kleinrock's NMC connected one of their SDS Sigma 7 computers to an Interface Message Processor, thereby becoming the first node on the ARPANET, and the first computer ever on the Internet. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at the Stanford Research Institute (SRI) provided the second node.


    SDS Sigma 7 Computer


    One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock’s laboratory to SRI. Two more nodes were added at University of California Santa Barbara (UCSB) and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.

    In 1972, Robert Kahn was hired by Lawrence Roberts at the IPTO to work on networking technologies, and in October he gave a demonstration of an ARPANET network connecting 40 different computers at the International Computer Communication Conference (ICCC), making the network widely known for the first time to people from around the world.


    Packet Switching Invention

    Enhancing Data Transportation Technology for the Network:

    Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking.

    Until around the end of the 1960s, whenever a task was to be run on a remote computer, data was sent over the telephone line using a method called circuit switching. Circuit switching worked just fine for phone calls but was very inefficient for computers and the Internet.

    Using this method, data could only be sent as one complete stream, to only one computer at a time over the network. It was common for information to get lost, forcing the whole procedure to restart from the beginning. It was time-consuming, ineffective, and costly. And in the Cold War era, it was also dangerous: an attack on the telephone system could destroy the whole communication system.

    The solution to the problems of circuit switching was to build a distributed packet-switched network, a simple and efficient method of transferring data. Instead of sending data as one big stream, the data is cut up into pieces. The network breaks the information into blocks (packets) and forwards them as fast as possible and in as many directions as possible, each packet taking its own route through the network until it reaches its destination.

    Once the packets arrive, they are reassembled. This is possible because each packet carries information about the sender, the destination, and a sequence number, which allows the receiver to put the packets back together in their original form.
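The cut-up-and-reassemble idea can be sketched in a few lines of Python. This is a toy model, not a real network stack: the packet format, the 8-byte payload size, and the destination address are all invented for illustration, and shuffling the list stands in for packets taking different routes and arriving out of order.

```python
import random

PACKET_SIZE = 8  # bytes of payload per packet (tiny, for illustration)

def packetize(message: bytes, dest: str) -> list:
    """Cut a message into numbered packets, each carrying its destination."""
    return [
        {"dest": dest, "seq": i, "payload": message[i:i + PACKET_SIZE]}
        for i in range(0, len(message), PACKET_SIZE)
    ]

def reassemble(packets: list) -> bytes:
    """Receiver side: put packets back in order using their sequence numbers."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

message = b"Packets may take different routes across the network."
packets = packetize(message, dest="10.0.0.7")

random.shuffle(packets)  # simulate packets arriving out of order

assert reassemble(packets) == message  # the original message is restored
```

Because every packet carries its own addressing and sequencing information, no single path through the network is critical, which is exactly the survivability property the early designers were after.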

    This method of distributed packet switching was researched by several scientists, but it was Paul Baran's ideas on distributed networks that were later adopted by the ARPANET.

    Baran began an investigation into the development of survivable communications networks, the results of which were first presented to the Air Force in the summer of 1961 as briefing B-265, then as paper P-2626, and then as a series of eleven comprehensive papers titled On Distributed Communications in 1964.

    Baran's study describes a remarkably detailed architecture for a distributed, survivable, packet-switched communications network. The network is designed to withstand almost any degree of destruction to individual components without loss of end-to-end communications. Since each computer could be connected to one or more other computers, it was assumed that any link of the network could fail at any time, and the network therefore had no central control or administration. Baran's architecture was well designed to survive a nuclear conflict, and helped to convince the US military that wide-area digital computer networks were a promising technology. Baran also talked to Bob Taylor and J.C.R. Licklider at the IPTO about his work, since they too were working to build a wide-area communications network. His 1964 series of papers then influenced Roberts and Kleinrock to adopt the technology for the development of the ARPANET a few years later, laying the groundwork that led to its continued use today.


    Internet Communication Standards

    The devices we use today are designed to connect to the wider global network automatically, but back then this was a complex task. From the first generations of computers onward, each machine communicated in its own different "language". For one machine to communicate effectively with a different machine, a standard protocol was needed to help these machines understand each other.

    This worldwide infrastructure, the network of networks that we call the Internet, is based on certain agreed-upon protocols. These protocols define how networks communicate and exchange data.

    The earliest network protocol was called the Network Control Protocol (NCP), but it provided host-to-host support only among the devices connected to the ARPANET itself.

    A problem arose when these early networks started expanding, since NCP worked only within the ARPANET. How could the nodes of an expanded network communicate with one another? The network needed to expand even more for the vision of a "global network" to become a reality.

    To build an open network of networks, a general protocol was needed: a set of rules that could make different kinds of computing machines communicate in one language. Those rules had to be strict enough for reliable data transfer but also loose enough to accommodate all the ways that data was transferred. This led to the design of a protocol called the Transmission Control Protocol/Internet Protocol (TCP/IP).

    The idea of open-architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called “Internetting”. Key to making the packet radio system work was a reliable end-to-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems and continuing to use Network Control Protocol (NCP).

    However, NCP did not have the ability to address networks (and machines) further downstream than a destination Interface Message Processor - IMP on the ARPANET and thus some change to NCP would also be required. The assumption was that the ARPANET would not be easily changeable in this regard. NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would crash. In this model, NCP had no end-to-end host error control, since the ARPANET was to be the only network in existence, and it would be so reliable that no error control would be required on the part of the hosts. Thus, Kahn decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.

    Four ground rules were critical to Kahn’s early thinking:

    • Network Connectivity: Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.
    • Error Recovery: Communications would be on a best effort basis. If a packet didn’t make it to the final destination, it would shortly be retransmitted from the source.
    • Black Box design: Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
    • Distribution: There would be no global control at the operations level.

    Other key issues that needed to be addressed were:

    • Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.
    • Providing for host-to-host “pipelining” so that multiple packets could be enroute from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
    • Gateway functions to allow it to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.
    • The need for end-to-end checksums, reassembly of packets from fragments and detection of duplicates, if any.
    • The need for global addressing.
    • Techniques for host-to-host flow control.
    • Interfacing with the various operating systems.
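One of the issues in the list above, end-to-end checksums, can be illustrated with a short Python sketch of the one's-complement checksum that was later standardized for IP, TCP, and UDP headers (RFC 1071). The packet payload below is invented for illustration; real implementations run this over specific header and data layouts.

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, as used by IP/TCP/UDP."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

def verify(data: bytes, checksum: int) -> bool:
    """Receiver side: recompute the checksum and compare."""
    return internet_checksum(data) == checksum

packet = b"some packet payload"
checksum = internet_checksum(packet)  # sender computes and transmits this

assert verify(packet, checksum)            # an intact packet passes
corrupted = b"some packet pAyload"         # one byte flipped in transit
assert not verify(corrupted, checksum)     # the corruption is detected
```

Placing this check at the two ends of the conversation, rather than inside the network, is what lets the gateways stay simple "black boxes", as Kahn's ground rules required.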

    Kahn began work on a communications-oriented set of operating system principles while at BBN and documented some of his early thoughts in an internal BBN memorandum titled "Communications Principles for Operating Systems". At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance to embed any new protocols in an efficient way. Thus, in the spring of 1973, after starting the internetting effort, Kahn asked Vint Cerf, then at Stanford University, to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original Network Control Protocol (NCP) design and development, and already had the knowledge about interfacing to existing operating systems. So, armed with Kahn's architectural approach to the communications side and with Cerf's NCP experience, they teamed up to spell out the details of what became TCP/IP.

    The original Cerf/Kahn paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services on the Internet. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable sequenced delivery of data (virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted or reordered packets. However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols, the simple IP which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added to provide direct access to the basic service of IP.
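The TCP/UDP split described above is still visible in today's socket APIs: UDP exposes IP's bare datagram service, while TCP layers ordering and retransmission on top. Here is a minimal Python sketch of a UDP exchange over the local loopback interface; the addresses and message are invented for illustration, and on a real network the application itself would have to cope with lost or reordered datagrams.

```python
import socket

# UDP gives direct access to IP's basic service: no connection, no
# delivery or ordering guarantees; each sendto() is one independent packet.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"a single datagram", addr)

data, source = receiver.recvfrom(2048)
print(data)  # the application, not the protocol, handles any losses

sender.close()
receiver.close()
```

A TCP version of the same exchange would instead use SOCK_STREAM sockets with connect() and accept(), gaining reliable, ordered delivery at the cost of connection setup, which is exactly the trade-off that led to splitting the original protocol in two.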


    TCP/IP Layer Model


    First Ever Email Service

    In 1972, Kahn successfully demonstrated the ARPANET at the International Computer Communication Conference (ICCC), the first public demonstration of the new network technology. It was also in that same year that the first major application, electronic mail, was introduced.

    In March 1972, Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages.


    First Email


    From there, email services took off as the largest network application for over a decade. This served as a forerunner of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of “people-to-people” traffic.


    The Invention of World Wide Web – WWW

    The World Wide Web, commonly known as the Web, is an information system enabling documents and other web resources to be accessed over the Internet.

    In 1989, Tim Berners-Lee invented the World Wide Web. His background in real-time communications systems design and text-processing software development led Tim to design an Internet-based hypermedia initiative for global information sharing while working at CERN, the European Particle Physics Laboratory.

    The first website at CERN – and in the world – was dedicated to the World Wide Web project itself and was hosted on Berners-Lee's NeXT computer.

    Tim also directs the W3 Consortium, an open forum of companies and organizations with the mission to realize the full potential of the Web.


    Web


    The web was originally conceived and developed to meet the demand for automated information-sharing between scientists in universities and institutes around the world.

    On 30 April 1993, CERN put the World Wide Web software in the public domain. Later, CERN made a release available with an open licence, a surer way to maximise its dissemination. These actions allowed the web to flourish.


    The Creation of Graphical User Interface (GUI) Web Browser

    After the invention of the Web by Tim Berners-Lee, most of the browsers then available were for Unix machines, which were expensive. This meant that the Web was mostly used by academics and engineers who had access to such machines. The user interfaces of those browsers also tended not to be very user-friendly, which further hindered the spread of the Web.

    Marc Andreessen, a student and part-time assistant at the National Center for Supercomputing Applications (NCSA) at the University of Illinois, decided to develop a browser that was easier to use and more graphically rich.

    In 1992, Andreessen recruited fellow NCSA employee Eric Bina to help with his project. The two worked tirelessly to create the new browser, which they called Mosaic. It was much more sophisticated graphically than other browsers of the time. Like other browsers, it was designed to display HTML documents, but it also introduced new formatting tags like "center".

    Another very important feature that made Mosaic stand out from previous browsers was the inclusion of the "image" tag, which allowed images to be embedded in web pages. Earlier browsers allowed the viewing of pictures, but only as separate files. Mosaic made it possible for images and text to appear on the same page. Mosaic also supported a graphical interface with clickable buttons that let users navigate easily, and controls that let users scroll through text with ease. Another innovative feature was the hyperlink. In earlier browsers, hypertext links carried reference numbers that the user typed in to navigate to the linked document; hyperlinks allowed the user to simply click a link to retrieve a document.


    Mosaic Browser Screenshot


    In early 1993, Mosaic was posted for download on NCSA's servers. It was immediately popular: within weeks, tens of thousands of people had downloaded the software. The original version was for Unix. Andreessen and Bina quickly put together a team to develop PC and Mac versions, which were released in the late spring of the same year. With Mosaic now available for more popular platforms, its popularity skyrocketed. More users meant a bigger Web audience; the bigger audience spurred the creation of new content, which in turn further increased the audience on the Web, and so on. As the number of users on the Web increased, the browser of choice was Mosaic, so its distribution increased accordingly.

    By December 1993, Mosaic's growth was so great that it made the front page of the New York Times business section. The article concluded that Mosaic was perhaps "an application program so different and so obviously useful that it can create a new industry from scratch". NCSA administrators were quoted in the article, but there was no mention of either Andreessen or Bina. Andreessen realized that once he was through with his studies, NCSA would take over Mosaic for itself. So, when he graduated in December 1993, he left and moved to Silicon Valley in California.


    NETSCAPE – The Next GUI Web Browser

    Andreessen settled in Palo Alto, where he soon met Jim Clark, the founder of Silicon Graphics, Inc. The two began talking about a possible new start-up company. Others were brought into the discussions, and it was decided that they would start an Internet company. Andreessen contacted old friends still working for NCSA and enticed a group of them to join the engineering team of the new company. In mid-1994, Mosaic Communications Corp. was officially incorporated in Mountain View, California. Andreessen became the Vice President of Technology for the new company.

    The new team's mandate was to create a product to surpass the original Mosaic. They had to start from scratch. The original had been created on university time with university money and so belonged exclusively to the university. The team worked furiously. One employee recalls, "a lot of times, people were there straight forty-eight hours, just coding. I've never seen anything like it, in terms of honest-to-God, human endurance, to sit in front of a monitor and program. But they were driven by the vision of beating the original Mosaic".

    The new product would need a name. Eventually, the name Netscape was adopted.

    In November 1998, Netscape was bought by AOL.

    Netscape Browser Snapshot




    The Internet has changed greatly in the more than half a century since it came into existence. It was conceived in the era of time-sharing but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed but has accommodated those new network technologies. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and it has spawned electronic mail, the World Wide Web, and more recently the Internet of Things (IoT). But most important, it started as the creation of a small band of dedicated researchers and has grown into a commercial success with billions of dollars of annual investment.

    The Internet is something we all use every day, and many of us can't imagine our lives without it. The Internet, and all the technological advances it offers, has changed our society. It has changed our jobs, the way we consume news and share information, and the way we communicate with one another. It has also created many opportunities, helped humanity progress, and shaped the human experience.

    The Internet, a technology so expansive and ever-changing, was not the work of just one person or institution. Many people contributed to its growth by developing new features.


    "We are all now connected by the Internet, like neurons in a giant brain."
    Stephen Hawking


    Don't forget to share the article on your social media handles by clicking the Share button so that others can also benefit!





