These networks were built to provide various types of services. In both cases you can think of them as a request-response architecture: a request for a service is made, and a resource is returned in the form of a response. In the physical world, think of an e-commerce order fulfilled by a logistics company delivering a package over a transportation network; in the digital world, think of a browser requesting a web page.
Exploring the World Wide Web
It is no exaggeration to say that the World Wide Web has had as profound an effect on human communication as the printing press. One key difference is that operation of the printing press was limited to a few select tradesmen, but on the web everyone has his or her own printing press; everyone can be a publisher of a website. Before creating your first website, you’ll examine a short history of the web because that history impacts the way you write code for your web pages. You’ll start by exploring the basic terminology of computer networks.
A network is a structure in which information and services are shared among devices known as nodes or hosts. A host can be any device that is capable of sending and/or receiving data electronically. The most common hosts that you will work with are desktop computers, laptops, tablets, mobile phones, and printers.

A host that provides information or a service to other devices on the network is called a server. For example, a print server is a network host that provides printing services, and a file server is a host that provides storage space for saving and retrieving files. The device that receives these services is called a client. A common network design is the client-server network, in which the clients access information provided by one or more servers. You might be using such a network to access your data files for this tutorial.
Networks are classified based on the geographic area they cover. A network confined to a small area, such as within a building or department, is referred to as a local area network or LAN. A network that covers a wider area, such as several buildings or cities, is called a wide area network or WAN. Wide area networks typically consist of two or more interconnected local area networks. The largest WAN in existence is the internet, which incorporates an almost uncountable number of networks and hosts involving computers, mobile devices (such as phones, tablets, and so forth), MP3 players, and gaming systems.
Locating Information on a Network
The biggest obstacle to effectively using the Internet is the network’s sheer scope and size. Most of the early Internet tools required users to master a bewildering array of terms, acronyms, and commands. Because network users had to be well versed in computers and network technology, Internet use was largely limited to programmers and computer specialists working for universities, large businesses, and the government.

The solution to this problem was developed in 1989 by Timothy Berners-Lee and other researchers at the CERN nuclear research facility near Geneva, Switzerland.
They needed an information system that would make it easy for their researchers to locate and share data on the CERN network. To meet this need, they developed a system of hypertext documents. Hypertext is a method of organization in which data sources are interconnected through a series of links or hyperlinks that users activate to jump from one data source to another. Hypertext is ideally suited for the Internet because end users don’t need to know where a particular document, information source, or service is located—they only need to know how to activate the link. This technique quickly spread beyond Geneva and was adopted by other networks across the Internet. The totality of these interconnected hypertext documents became known as the World Wide Web. The fact that the Internet and the World Wide Web are synonymous in many users’ minds is a testament to the success of the hypertext approach.
Web Pages and Web Servers
Documents on the web are stored on web servers in the form of web pages and accessed through a software program called a web browser. The browser retrieves the document from the web server and renders it locally in a form that is readable on a client device. However, because there is a wide selection of client devices ranging from desktop computers to mobile phones to screen readers that relay data aurally, each web page must be written in code that is compatible with every device. How does the same document work with so many different devices? To understand, you need to look at how web pages are created.
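The request-response cycle described above can be sketched in a few lines of Python using only the standard library. In this sketch, a local test server stands in for a real web server, and `urlopen` stands in for a browser; the names `PAGE`, `PageHandler`, and `fetch_page` are illustrative, not part of any real web server.

```python
# Minimal sketch of the client-server, request-response cycle:
# a client requests a web page, and the server returns it as a response.
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.request import urlopen

# A tiny web page, stored on the "server" side.
PAGE = b"<html><body><h1>Hello, web!</h1></body></html>"

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server answers the client's request with the page.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # keep test output quiet

def fetch_page():
    # Bind to port 0 so the operating system picks a free port.
    server = HTTPServer(("127.0.0.1", 0), PageHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        # The client (a stand-in for a browser) requests the page.
        with urlopen(f"http://127.0.0.1:{port}/") as response:
            return response.read()
    finally:
        server.shutdown()

if __name__ == "__main__":
    print(fetch_page().decode())
```

A real browser performs the same exchange over the Internet, then goes one step further: it renders the returned HTML into the formatted page you see on screen.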
A web page is a simple text file written in HTML (Hypertext Markup Language). You’ve already read about hypertext, but what is a markup language? A markup language is a language that describes the content and structure of a document by “marking up,” or tagging, different document elements. For example, this tutorial contains several document elements such as the tutorial title, main headings, subheadings, paragraphs, figures, figure captions, and so forth. Using a markup language, each of these elements could be tagged as a distinct item within the “tutorial document.” Thus, a Hypertext Markup Language is a language that supports both the tagging of distinct document elements and connecting documents through hypertext links.
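To make the idea of tagging concrete, here is a minimal sketch of a hypertext document; the headings, text, and link target are placeholders chosen for illustration:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- The title element tags the document title -->
  <title>Tutorial Document</title>
</head>
<body>
  <!-- The h1 element tags a main heading -->
  <h1>Exploring the World Wide Web</h1>
  <!-- The p element tags a paragraph -->
  <p>Documents on the web are stored on web servers.</p>
  <!-- The a element tags a hypertext link to another document -->
  <a href="https://www.w3.org">Visit the W3C</a>
</body>
</html>
```

Notice that each element is marked by a pair of tags, such as `<p>` and `</p>`, and that the `a` element is what turns plain text into a hyperlink.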
The History of HTML
In the early years, no single organization defined the rules or syntax of HTML. Browser developers were free to define and modify the language in different ways which, of course, led to problems as different browsers supported different “flavors” of HTML, and a web page that was written based on one browser’s standard might appear totally different when rendered by another browser. Ultimately, a group of web designers and programmers called the World Wide Web Consortium, or the W3C, settled on a set of standards or specifications for all browser manufacturers to follow.
The W3C has no enforcement power, but, because using a uniform language is in everyone’s best interest, the W3C’s recommendations are usually followed, though not always immediately. Each new version of HTML goes through years of discussion and testing before it is formally adopted as the accepted standard. For more information on the W3C and its services, see its website at www.w3.org.

By 1999, HTML had progressed to the fourth version of the language, HTML 4.01, which provided support for multimedia, online commerce, and interactive scripts running within the web page.
However, there were still many incompatibilities in how HTML was implemented across different browsers and how HTML code was written by web developers. The W3C sought to take control of what had been a haphazard process and enforce a stricter set of standards in a different version of the language called XHTML (Extensible Hypertext Markup Language). By 2002, the W3C had released the specifications for XHTML 1.1. But XHTML 1.1 was intended to be only a minor upgrade on the way to XHTML 2.0, which would correct many of the deficiencies found in HTML 4.01 and become the future language of the web.
One problem was that XHTML 2.0 would not be backward compatible with HTML and, as a result, older websites could not be easily brought into the new standard. Web designers rebelled at this development and, in response, the Web Hypertext Application Technology Working Group (WHATWG) was formed in 2004 with the mission to develop a rival version to XHTML 2.0, called HTML5. Unlike XHTML 2.0, HTML5 would be compatible with earlier versions of HTML and would not apply the same strict standards that XHTML demanded. For several years, it was unclear which specification would win out, but by 2006, work on XHTML 2.0 had completely stalled.