Internet for Historians, History of the Internet
The development of the Internet

by R.T. Griffiths


Introduction
How do we explain inventions and innovations?
The Development of Computers
The History of the Internet
 1. The Creation of ARPANET
 2. Time for a Few Basics
 3. From Arpanet to Internet
 4. From Internet to the World Wide Web
 5. The World Wide Web (WWW)


Introduction

This course traces:
- the history and development of the electronic media, including the internet and the World Wide Web;
- the development of electronic mail, in particular the rise of discussion lists;
- the advances in search engines, from pioneers such as Archie and Veronica onwards.

By approaching the internet through its history, we should get a clearer understanding of its basic principles. Even though the internet, and the computers that drive it, seem ultra-modern, we should not view this history without the conceptual frameworks which historians employ to analyse other inventions and innovations. In addition, this course describes the practical steps needed to access this technology, and suggests ways to optimise the search for historical data and documentation in particular.

How do we explain inventions and innovations?

The internet is an innovation (or rather a series of innovations) that enables communication and transmission of data between computers at different locations. It is an extremely new scientific development, but that does not mean that we cannot analyse it historically, using the concepts that we apply to other innovations in the past. Basically the discussion among historians about the causation of inventions tends to boil down to two main approaches:

- are inventions 'science' led, or
- are they determined by the material environment.

The second approach, that of the material environment, usually divides into two sub-questions:

- are inventions determined by supply constraints, or
- are they called forth by demand.

To illustrate this, let us turn to the early eighteenth century and the start of the industrial revolution.

The 'science school' faces the difficulty that many of the early inventions were not really scientifically based. The earliest textile machinery was made largely of wood and depended for its success on various combinations of levers, pulleys and spindles. The steam-engine was admittedly more intricate, but it was not until the early 19th century that its principles were correctly described. And the series of anonymous advances in the size and shape of blast furnaces, their linings and the various mixes of fuels and ores all took place within the existing corpus of knowledge among iron-masters themselves. Nonetheless, the science school emphasises the growth of learned societies, the rise of a new environment of experimentation and the growth of a corpus of experts working in these new areas (the over-representation of non-conformist Scots among them is explained by the fact that their religion excluded them from the higher echelons of traditional careers). It emphasises the rise of a 'scientific method' rather than the role of formal science... the asking of new questions, the methodical pursuit of experiments and the insistence on scientific measurement.

The material school that emphasises supply constraints has a much easier time of it. The earliest textile innovations came in weaving, which created a bottleneck in the spinning of yarn. This, in its turn, led to a cluster of innovations in spinning (and later the introduction of steam-engines), which created a new bottleneck in the weaving sector. And this led, in the 1830s, to the invention, and continuous improvement, of power-looms. The first steam-engines were pumping engines (water and air) introduced because the need for fuel meant the sinking of ever deeper mine shafts, which had to be kept ventilated and unflooded. They were highly inefficient in terms of fuel consumption, which didn't really matter as long as they were situated at mine-heads. It was only when their efficiency was improved and they were adapted for rotary motion that they could be applied elsewhere. The experimentation in blast-furnaces was equally dictated by fuel shortages (caused by the disappearance of forests) and by the exhaustion of supplies of rich iron-ore. Thus, the argument goes, in each case shortages and bottlenecks drove up the prices of critical materials and stimulated the search for improvements. Before leaving the 'supply school' it is worth adding a caveat - some of these inventions depend on innovations elsewhere before they can be effectuated. For example, it is nice to invent a steam engine, but if you cannot make pistons and shafts to the right tolerances, or cannot make sheet metal of even consistency, or do not have appropriate welding techniques, you cannot build it - the works will jam and the boiler will buckle and explode.

The 'demand-siders' would argue for the importance of new markets. They would point to the population growth and urbanisation in the United Kingdom in this period, to improvements in transportation and to the development of overseas markets. This works best, of course, when dealing with consumer goods. For textiles they would point to the greater number of British inhabitants to clothe and the reduced opportunities for household production. They would observe the shift in preferences from woollen textiles to cotton and to lighter fabrics in general, and they would certainly emphasise the coincidence of the development of spinning technology and the introduction of steam with the fact that the Napoleonic Wars had given the British not only unfettered access to their own colonial markets, but to those of France and the Netherlands as well.

The Industrial Revolution illustrates the problem for the historical analysis of inventions. Someone has to have the idea and, unless we are going to be satisfied with a 'heroic inventors' school of explanation, we will be driven to search for the social and scientific context. Yet an invention has to be feasible, and it has to be applied if it is to have any effect. This means that it has to be worthwhile, either because it allows one to do something better or cheaper (supply-side) or because it allows one to make more of something, or something entirely new (demand-side). It is impossible to isolate one factor, though the balance in the explanation varies according to the circumstances. While the supply-side explanation seems to have the stronger claim to 'importance' in the so-called 'First Industrial Revolution', this is far less so in the 'Second Industrial Revolution', dating from the 1870s, based on chemicals and electricity and associated not only with new production processes but with a whole new range of consumer goods, from motor vehicles to consumer durables, associated with higher real incomes.


The Development of Computers

The Internet is a system for allowing computers to communicate with each other. It goes without saying that before we can get the Internet, we have to have computers. Much of the information for this section was derived from The Virtual Computing History Museum.

The first step towards the modern computer was Samuel Morse's invention in 1844 of communication using electrical impulses: a key and a special code that mapped sequences of pulses to the letters of the alphabet. We won't get bogged down in whether Morse was actually the inventor of the telegraph (or his partner Alfred Vail in 1837), since you can find all you ever wanted to know about the topic at The Telegraph Office page.

The next step is to link this particular invention to another of man's perennial strivings, the creation of a calculating machine. Although calculators have existed since the wire and bead abacus first appeared in Egypt around 500 BC, one could say that the first main step towards the modern computer was Charles Babbage's experiments in the 1820s-1840s to build a "Difference Engine". I used the words 'one could say' deliberately, because small errors in his calculations meant that Babbage never actually managed to build his engine - the Science Museum in Kensington built a copy of the Difference Engine in 1991 to celebrate the bi-centenary of his birth. The idea of digital calculation was taken a step further by Herman Hollerith, who developed digital processing machines to assist in compiling the 1890 US Census. Hollerith's firm went on to become part of the Computing-Tabulating-Recording (C-T-R) Company in 1911, a company renamed IBM (International Business Machines) in 1924. Babbage's and Hollerith's ideas for digital computing, however, seemed to have led to a dead-end, with most scientists preferring to develop techniques for analog devices, based on slide-rule principles. These, too, could get pretty big, as the Differential Analyzer built at the Massachusetts Institute of Technology (MIT) in 1930 reveals. But machines of this size were also running up against the frontiers of their capabilities and, by the end of the 1930s, new interest was being shown in digital devices. By now a whole host of devices associated with the development of the telephone (switches, relays etc.) and radio (cathode tubes) had extended the range of possible solutions. But what accelerated developments was the outbreak of World War II.

The war produced two major bottlenecks that were solved by digital machines. In the US, the need for gun-firing tables, navigational tables and tracking and aiming devices for anti-aircraft guns resulted in 1944 in the development of the first large-scale automatic electromechanical calculator, the Harvard Mark I, built by IBM. Note that it did not have an inbuilt program; the operating instructions were fed in by paper tape. A second crying need was to break the German (and Japanese) codes quickly enough to be useful. This work was undertaken by British scientists at Bletchley Park, and it culminated in the construction of the Colossus, which became operational in 1944. This was more advanced than the Harvard Mark I, but its subsequent impact was limited by the fact that its very existence remained a classified secret until 1970.

The War had produced a considerable advance in design technology, but basically we were still at the stage of large and complex calculating machines. The challenge was to produce a device with an internal stored memory, a leap that would take us from calculators to computers proper. The war had also created a pool of scientists with experience in digital computing, and work advanced rapidly on both sides of the Atlantic. If we are looking for the first modern computer, the credit should go to Manchester University, whose prototype, Baby, became operational in June 1948, followed soon after by a full-scale operational model, the Manchester Mark I. The next major step, the incorporation of a Random Access Memory, came three years later with the Whirlwind, constructed at MIT. Until now computer advances had been developed either for various branches of government or as prototype units within universities. In 1951 Remington-Rand entered the market with the UNIVAC computer, largely in an effort to recoup the cost over-run on its contract with the US government, which had originally ordered the device for the census. A year later, it started producing ready-made software (although the term did not come into use until a decade later). IBM, which had previously specialised in punch-card systems, entered the market with its 700 series in 1953. Offering a 60 per cent discount for educational uses, IBM quickly came to dominate the university market. Computers were now spreading quickly through the business and scientific communities, becoming ever faster and ever more user-friendly. They were also becoming smaller. By the end of the 1950s, transistors were beginning to oust cumbersome vacuum tubes, and in 1958/59 the first 'integrated circuit' was produced on a piece of silicon - five components on a piece 1 cm long. The 'chip' was born, and it entered commercial production in 1961.

In 1961 IBM introduced a 'Compatible Time Sharing System' into its 7090/94 series which allowed separate terminals in different offices to access the same hardware. The concept of "remote access" to a "host" computer had become reality. And if you could link to one computer from a desktop terminal, why not to another.... why not to all?


The History of the Internet

For the information contained in this section I relied heavily on the following sites:
An Atlas of Cyberspace
A Brief History of the Internet
History of Internet
Hobbes' Internet Timeline


1. The Creation of ARPANET

To get to the origins of the Internet, we have to go back in time to 1957. You probably have no cause to remember, but it was International Geophysical Year, a year dedicated to gathering information about the upper atmosphere during a period of intense solar activity. Eisenhower announced in 1955 that, as part of the activities, the USA hoped to launch a small Earth-orbiting satellite. The Kremlin announced that it hoped to do likewise. Planning in America focussed on a sophisticated three-stage rocket, but in Russia they took a more direct approach. Strapping four military rockets together, on 4 October 1957 the USSR launched Sputnik I (a 70 kg bleeping sphere the size of a medicine ball) into Earth orbit. The effect in the United States was electrifying, since it seemed overnight to wipe out the feeling of invulnerability the country had enjoyed since the explosion of the first nuclear bomb twelve years before. One of the immediate reactions was the creation of the Advanced Research Projects Agency (ARPA) within the Department of Defense. Its mission was to apply state-of-the-art technology to US defence and to avoid being surprised (again!) by the technological advances of the enemy. It was also given interim control of the US satellite program until the creation of NASA in October 1958.

ARPA became the technological think-tank of the American defence effort, directly employing a couple of hundred top scientists and with a budget sufficient for sub-contracting research to other top American institutions. Although advanced computing would come to dominate its work, the initial focus of ARPA's activities was on space, ballistic missiles and nuclear test monitoring. Even so, from the start ARPA was interested in communication between its operational base and its sub-contractors, preferably through direct links between its various computers.

In 1962 ARPA opened a computer research program and appointed the MIT scientist John Licklider to lead it. Licklider had just published his first memorandum on the "Galactic Network" concept... a futuristic vision in which computers would be networked together and accessible to everyone. Within ARPA, Leonard Kleinrock was already developing ideas for sending information by breaking a message up into 'packets', sending them separately to their destination and reassembling them at the other end. This would give more flexibility than opening one line and sending the information through that alone. For example, the system would not be reliant on a single routing and, if files were broken up before transfer, it would be more difficult to eavesdrop on them... both useful security advantages. The inadequacy of the telephone network for running programs and transferring data was revealed in 1965 when, as an experiment, computers in Berkeley and at MIT were linked over a low-speed dial-up telephone line to become the first "wide area network" (WAN) ever created.

By 1966/67 research had developed sufficiently for the new head of computer research, Lawrence Roberts, to publish a plan for a computer network system called ARPANET. When these plans were published it became clear that, independently of each other and in ignorance of each other's work, teams at MIT, the National Physical Laboratory (UK) and the RAND Corporation had all been working on the feasibility of wide area networks, and their best ideas were incorporated into the ARPANET design. The final requirement was to design the interface message processors (IMPs) that would allow the computers to send and receive messages and data. Work on this was completed in 1968, and the time had come to put the theory to the test. In October 1969, IMPs were installed in computers at both UCLA and the Stanford Research Institute (SRI). UCLA students would 'login' to Stanford's computer, access its databases and try to send data. The experiment was successful, and the fledgling network had come into being. By December 1969 ARPANET comprised four host computers, with the addition of research centres in Santa Barbara and Utah. In the months that followed, scientists worked on refining the software that would expand the network's capabilities. At the same time, ever more computers were linked to the net. By December 1971 ARPANET linked 23 host computers to each other.


2. Time for a Few Basics

Here we have the first true computer network. Since it is all still fairly basic, it is worth pausing here, for the underlying principles have basically remained the same ever since (even if they, mercifully, operate far faster and look much prettier). We start off with a passive terminal and an active host: a keyboard and a computer. They are linked together by a cable. By typing in commands recognised by the computer, you can use the programs stored on it and access its files (modifying them and printing them out as desired). Most people can envisage this arrangement within a single building, or complex of buildings.

In order to access another computer, at a completely different facility, we have first to reach it. At the time this was usually done over a (high-speed) telephone line (or lines). Once you arrive at the new 'host' you have to convince it to treat you in the same way as someone behind a terminal within its own system. Hence the need for an interface message processor (IMP), and for the same IMP to be installed in both computers! Now you can access its files. Of course, in order to preserve confidentiality, all computers differentiated between 'open' files and those that were password-protected.

If you wanted to transfer a file or program to your own computer, the host computer used a program to break it down into 'packets', attaching to each the address and its original position in the sequence. It then sent them to your 'home' computer, where a mirror program reassembled the message in the original order. Thereafter you could access the file from your home base. When dealing with a 'simple' network like ARPANET it is difficult to see what the real advantage of this process was. But this would soon change...
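To make the principle concrete, here is a minimal sketch in modern Python (all names are illustrative; this is not ARPANET's actual software) of a message being broken into numbered packets and reassembled at the other end, whatever order they arrive in:

    import random

    def packetize(message, destination, size=8):
        # Break the message into packets, each carrying the destination
        # address and its original position in the sequence.
        return [{"dest": destination, "seq": i, "data": message[i:i + size]}
                for i in range(0, len(message), size)]

    def reassemble(packets):
        # The 'mirror program' at the destination: sort the packets by
        # position and stitch the original message back together.
        return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

    packets = packetize("Packets may travel by different routes.", "host-B")
    random.shuffle(packets)  # simulate packets arriving out of order
    assert reassemble(packets) == "Packets may travel by different routes."

Because each packet carries its own address and position, no single route and no fixed order of arrival are needed... exactly the flexibility and security advantages described above.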


3. From Arpanet to Internet

In October 1972 ARPANET went 'public'. At the First International Conference on Computers and Communication, held in Washington DC, ARPA scientists demonstrated the system in operation, linking together computers from 40 different locations. This stimulated further research in the scientific community throughout the Western world. Soon other networks would appear. The Washington conference also set up an Internetworking Working Group (IWG) to coordinate the research taking place. Meanwhile ARPA scientists worked on refining the system and expanding its capabilities:

- In 1972, they successfully employed a new program for sending messages over the net, allowing the direct person-to-person communication that we now refer to as e-mail. This development we will deal with at length in the next section.
- Also in the early 70s, scientists developed host-to-host protocols. Before then the system had only allowed a 'remote terminal' to access the files of each separate host. The new protocols allowed access to the host's programs (effectively merging the two host computers into one for the duration of the link).
- In 1974, ARPA scientists, working closely with experts at Stanford, developed a common language that would allow different networks to communicate with each other. This became known as the Transmission Control Protocol/Internet Protocol (TCP/IP).

The development of TCP/IP marked a crucial stage in the evolution of networking, and it is important to reflect on the implications inherent in the design concepts... since it could all have turned out very differently. One crucial concept was that the system should have an 'open architecture', in effect implementing Licklider's original idea of a "Galactic Network":

- Each network should be able to work on its own, developing its own applications without restraint and requiring no modification to participate in the Internet.
- Within each network there would be a 'gateway', which would link it to the 'outside world'. This would be a larger computer (in order to handle the volume of traffic) with the necessary software to transmit and redirect any 'packets'.
- This gateway software would retain no information about the traffic passing through. This was designed to cut down the workload and speed up the traffic, but it also removed a possible means of censorship and control.
- Packets would be sent by the fastest available route. If one computer was blocked or slow, the packets would be rerouted through the network until they eventually reached their destination.
- The gateways between the networks would always be open, and they would route the traffic without discrimination.
- Also implicit in the development was that the operating principles would be freely available to all the networks.

This freeing of design information was an early and integral part of the research environment, and it greatly facilitated subsequent technological advance. It is worth remembering, at this stage, that we are still in a world where we are talking almost exclusively about large mainframe computers (owned only by large corporations, government institutions and universities). The system was therefore designed with the expectation that it would work through a limited number of national (sub-)networks. Although 1974 marked the beginning of TCP/IP, it would take several years of modification and redesign before it was completed and universally adopted. One adaptation, for example, was a stripped-down version, designed already in the mid-1970s, that could be incorporated into the new micro-computers then being developed. A second design challenge was to develop a version of the software that was compatible with each of the computer networks (including that of ARPANET itself!).
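As a rough illustration of the 'stateless gateway' principle listed above, the toy sketch below (in Python, with hypothetical names; real TCP/IP routing is far more elaborate) shows gateways that inspect only a packet's destination address, pass it one hop onwards and keep no record of the traffic:

    # A toy forwarding table: for each node, the next hop towards each
    # destination network. Purely illustrative names.
    ROUTES = {
        "net-A": {"net-B": "gw-1", "net-C": "gw-2"},
        "gw-1":  {"net-B": "net-B"},
        "gw-2":  {"net-C": "net-C"},
    }

    def forward(packet, here):
        # Each hop looks only at the destination address and passes the
        # packet on; nothing about the traffic is stored anywhere.
        while here != packet["dest"]:
            here = ROUTES[here][packet["dest"]]
        return here

    assert forward({"dest": "net-C", "data": "hello"}, "net-A") == "net-C"

Because the table maps destinations to next hops rather than to fixed end-to-end routes, a blocked or slow hop can simply be given a different entry, and the traffic reroutes itself.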

Meanwhile computer networking developed apace. In 1974 BBN opened Telenet, the first openly accessible public 'packet data service' (a commercial version of ARPANET). In the 1970s the US Department of Energy established MFENet for researchers into Magnetic Fusion Energy, which spawned HEPNet, devoted to High Energy Physics. This inspired NASA physicists to establish SPAN for space physicists. In 1976 a Unix-to-Unix protocol was developed by AT&T Bell Laboratories and freely distributed to all Unix computer users (since Unix was the main operating system employed by universities, this opened up networking to the broader academic community). In 1979 Usenet was established, an open system focussing on e-mail communication and devoted to 'newsgroups'; it is still thriving today. In 1981 Bitnet (Because it's Time..) was developed at the City University of New York to link university scientists using IBM computers in the Eastern US, regardless of discipline, and CSNet, funded by the US National Science Foundation, was established to facilitate communication between computer scientists in universities, industry and government. In 1982 a European version of the Unix network, Eunet, was established, linking networks in the UK, Scandinavia and the Netherlands; it was followed in 1984 by a European version of Bitnet, known as EARN (European Academic and Research Network).

Throughout this period the world of networking was still fairly chaotic, with a plethora of competing techniques and protocols. ARPANET was still the backbone of the entire system. When, in 1982, it finally adopted TCP/IP, the Internet was born... a connected set of networks using the TCP/IP standard.


4. From Internet to the World Wide Web

So far, the net's development had been almost entirely 'science-led'. All this time, however, we must remember that parallel advances in computer capacities and speeds (not to mention the introduction of glass-fibre cables into communications networks) were enabling the system to expand. This expansion, in its turn, started to produce supply constraints, which stimulated further advances. By the early 1980s, when the Internet proper started operation, it was already beginning to face problems created by its own success. First, there were more computer 'hosts' linked to the net than had originally been envisaged (in 1984 the number of hosts topped 1,000 for the first time) and, second, the volume of traffic per host was much larger (mainly because of the phenomenal success of e-mail). Increasingly, predictions were voiced that the entire system would eventually grind to a halt.

One early and essential development was the introduction in 1984 of the Domain Name System (DNS). Until then each host computer had been assigned a name, and there was a single integrated list of names and addresses that could easily be consulted. The new system introduced some tiering into US internet addresses, such as .edu (educational), .com (commercial) and .gov (governmental), in addition to .org (organisations) and a series of country codes. This made the names of host computers easier to remember (e.g. our own address www.leidenuniv.nl), but the system is even cleverer than that: when we type in these addresses, the computer is actually sending/receiving a coded sequence of numbers such as 132.229.XX.XX (which is the address of the Leiden University computer).
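That translation can still be watched at work today. A minimal Python sketch (it needs a live network connection, and the address returned now may differ from the 132.229 range quoted above):

    import socket

    # Ask the Domain Name System to translate a human-readable host name
    # into the numeric address that the computers actually exchange.
    ip = socket.gethostbyname("www.leidenuniv.nl")
    print(ip)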

A second development was the decision by national governments to encourage the use of the internet throughout the higher educational system, regardless of discipline. In 1984 the British government announced the construction of JANET (Joint Academic Network) to serve British universities, but more important was the decision, the following year, of the US National Science Foundation to establish NSFNet for the same purpose (one explicit requirement for receiving funding was that access had to be for "all qualified users on campus"). The American program involved a number of decisions that were crucial for the further development of the Internet:
- The use of TCP/IP protocols was mandatory for all participants in the program
- Federal Agencies would share the cost of establishing common infrastructures (such as trans-oceanic connections) and support the gateways
- NSFNet signed shared infrastructure 'no-metered-cost' agreements with other scientific networks (including ARPANET) which formed the model for all subsequent agreements.
- It threw its support behind the 'Internet Activities Board' (the direct descendent of the Internetworking Working Group established back in 1972) and encouraged international cooperation in further research.
- Finally, NSFNet agreed to provide the 'backbone' for the US Internet service, and provided five 'supercomputers' to service the envisaged traffic. The first computers provided a network capacity of 56,000 bits per second, but this was upgraded in 1988 to 1,544,000 bits per second. There was one proviso.... this facility excluded "purposes not in support of research and education".

The effect of the creation of NSFNet was dramatic. In the first place, it broke the capacity bottleneck in the system. Secondly, it encouraged a surge in Internet use. It had taken a decade for the number of computer hosts attached to 'the Net' to top the thousand mark. By 1986 the number of hosts had reached 5,000, and a year later the figure had climbed to 28,000. Thirdly, the exclusion of commercial users from the backbone had the (intended) consequence of encouraging the development of private Internet providers.

The exclusion of commercial users from the backbone did not mean that their interests had been neglected. For several years, hardware and software suppliers had been adding TCP/IP to their product packages, but they had little experience of how the products were supposed to work and therefore experienced difficulties in adapting them to their own needs. Part of the force behind the Internet's early development had been the open availability of information (since 1969 most of the key research memoranda, and the discussions they had generated, had been archived in downloadable on-line files), but now the Internet Activities Board went a step further. In 1985 it organised the first workshop specifically targeting the private sector, to discuss the potential (and current limitations) of TCP/IP protocols... beginning a dialogue between government/academic scientists and the private sector, and among private entrepreneurs themselves (who, from the beginning, were thus able to ensure the interoperability of their products). In 1987 the first subscription-based commercial Internet company, UUNET, was founded. Others followed. At this stage, the Internet was still quite a forbidding place for the uninitiated. Access commands to find data ranged from the complicated to the impenetrable, the documentation available was mostly (highly) scientific, the presentation was unattractive (courier script, no colour), finding stuff was a pain in the neck and transfer times were relatively slow. The main attractions for the commercial sector were e-mail, newsgroups, 'chat' facilities and computer games.

Although commercial exploitation of the net had started, the expansion of the Internet continued to be driven by the government and academic communities. It was also becoming ever more international. By 1989 the number of hosts surpassed 100,000 for the first time and had climbed to 300,000 a year later. The end of the 1980s and the start of the 1990s provide a convenient cut-off point for several reasons:

- In 1990 ARPANET (which had been stripped of its military research functions in 1983) became a victim of its own success. The network had been reduced to a pale shadow of its former self and was wound up.
- In 1990, the first Internet search-engine for finding and retrieving computer files, Archie, was developed at McGill University, Montreal. The development of search-engines will be dealt with in the last lecture.
- In 1991, the NSF removed its restriction on private access to its backbone computers
- "Information superhighway" project came into being. This was the name given to popularise Al Gore's High Performance Computing Act which provided funds for further research into computing and improving the infrastructure of the Internet's (US) structure. Its largest provisions from 1992-96 were $1,500 mln for the NSF, $600 mln for NASA and $660 for the Department of Energy.
- And in 1991 the World Wide Web was released to the public and, on a personal note, Richard T. Griffiths (famous for his phrase 'a user-friendly interface is a secretary') got kicked into WordPerfect and was launched into cyber-space.


5. The World Wide Web (WWW)

The World Wide Web is a network of sites that can be searched and retrieved by a special protocol known as the Hypertext Transfer Protocol (HTTP). The protocol simplified the writing of addresses, automatically searched the internet for the address indicated and called up the document for viewing. The WWW concept was designed in 1989 by Tim Berners-Lee and scientists at CERN (Geneva), the European centre for High Energy Physics, who were interested in making it easier to retrieve research documentation. A year later he had developed a 'browser/editor' program and had coined the name World Wide Web for it. The program was released free on an FTP site. This doesn't sound very dramatic, but anyone used to the hassle of retrieving documents before then will testify that it represented a major leap forward. Once the entire dial-and-retrieve language had been simplified, the next step was to design an improved 'browser', a system which allowed the links to be hidden behind text (using the Hypertext Markup Language, HTML) and activated by a click of the 'mouse'. To get an idea of the 'state-of-the-art' in browser technology by 1992, go here (the earlier version would not have supported colour, and the logos and diagrams would have been in separate windows as well). Until that occurred the transition to the new system was slow. By the end of 1992 there were only 50 web-sites in the world, and a year later the number was still no more than 150.
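To see how little the protocol itself involves, here is a minimal sketch of an HTTP request made 'by hand' in modern Python (using example.com, a standard demonstration host, and the later HTTP/1.1 form of the request, which requires a Host header):

    import socket

    # Open a connection to a web server, ask for a document by address,
    # and read back the reply (status line, headers, then the HTML).
    with socket.create_connection(("example.com", 80)) as s:
        s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        reply = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            reply += chunk

    print(reply.decode(errors="replace")[:300])

Everything a browser adds (rendering the HTML, hiding links behind clickable text) sits on top of this simple request-and-reply exchange.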

What is the difference between the Internet as it then existed and the Web? Tim Berners-Lee was often asked the same question and gave the following answer:

"The Internet ('Net) is a network of networks. Basically it is made from computers and cables. What Vint Cerf and Bob Khan did was to figure out how this could be used to send around little "packets" of information. As Vint points out, a packet is a bit like a postcard with a simple address on it. If you put the right address on a packet, and gave it to any computer which is connected as part of the Net, each computer would figure out which cable to send it down next so that it would get to its destination. That's what the Internet does. It delivers packets - anywhere in the world, normally well under a second.

Lots of different sorts of programs use the Internet: electronic mail, for example, was around long before the global hypertext system I invented and called the World Wide Web ('Web). Now, videoconferencing and streamed audio channels are among other things which, like the Web, encode information in different ways and use different languages between computers ("protocols") to provide a service.

The Web is an abstract (imaginary) space of information. On the Net, you find computers -- on the Web, you find documents, sounds, videos,.... information. On the Net, the connections are cables between computers; on the Web, connections are hypertext links. The Web exists because of programs which communicate between computers on the Net. The Web could not be without the Net. The Web made the Net useful because people are really interested in information (not to mention knowledge and wisdom!) and don't really want to have to know about computers and cables."

In 1993 Marc Andreessen of the NCSA (National Center for Supercomputing Applications, Illinois) launched Mosaic X. It was easy to install, easy to use and, significantly, backed by 24-hour customer support. It also enormously improved the graphic capabilities (by using 'in-line imaging' instead of separate boxes) and introduced many of the features that are familiar to you from the browsers you are using to view these pages, such as Netscape (the successor company established by Andreessen to exploit Mosaic) and Bill Gates' Internet Explorer. Like so many other Internet innovations, trial versions of Mosaic were made available free to the educational community. Mosaic soon became a runaway hit. By 1994 tens of thousands of copies had been installed on computers throughout the world. The potential of HTML to create graphically attractive web-sites and the ease with which these sites could be accessed through the new generations of web-browsers opened the Web to whole new groups. Until now, the Web had served two main communities: the scientific community (accessing on-line documentation) and a wider 'netizens' (net citizens) community (accessing e-mail and news-group facilities). Now commercial web-sites began to proliferate, followed at a short distance by local school/club/family sites. These developments were accelerated by the appearance of ever more powerful (and cheap) personal computers (which increased both the number of netizens and the potential market for businesses) and by the increase in the capacity of the communications infrastructure. The Web now exploded.

In 1994 there were 3.2 mln hosts and 3,000 web-sites. Twelve months later the number of hosts had doubled and the number of web-sites had climbed to 25,000. By the end of the next year the number of host computers had doubled again, and the number of web-sites had increased more than tenfold. In that year, by the way, the History Department of Leiden University established its own web presence, placing its site among the first 5 per cent of web-sites ever constructed. The following year we started the course 'Internet for Historians' and, within the sections Economic and Social History, we began the development of 'course-based' web-sites. This all took place in 1997, by which time the number of host computers integrated into the Web had reached 19.5 mln, and the number of web-sites had shot up to 1.2 million. By the last count, in 1998, the number of hosts stood at 36.8 million and the number of web-sites had reached 4.2 million.




Author: Prof. dr. R.T. Griffiths
Last Updated: 3-9-1999