A Brief History of Linux

History provides a common body of reference from which new endeavors can be launched. In launching the Bozeman Linux Users Group, it seemed worthwhile to revisit how we got here. The currents and channels of human progress are manifold. Herein I choose to describe a very narrow path that leads to the development of Linux, while trying to get back to the roots so that an accurate historical perspective may be gained. Please keep in mind that vast realms of computing history were left out in order to tell this story in the space and time allotted.

Linux is a trademark owned by Linus Torvalds, the creator.

Linux is an operating system.

Technically, Linux refers to the kernel of what is commonly referred to as the Linux Operating System. Modern operating systems are quite complex and involve many different tools and utilities. Generally, the usage of the term "Linux" in these writings will be in reference to the operating system as a whole. It will be clear from context or the use of the term "kernel" when dealing with that core package. Operating systems in general may be referred to by the acronym "OS".

To understand the role of the operating system it is useful to review how computers were used when the need for an OS was first recognized. In the early 1950s, when the first commercial computers came on the market, a program had to include all the commands to control the computer as well as those to calculate the desired data. This was referred to as a "batch job". The computer operator set up the computer to run each program. When a program was done the operator would unload the computer and then load the next program, a very time-consuming process during which the very expensive computer sat idle until the next program was ready.

What is credited as being the first operating system is the GM-NAA I/O, developed by Robert L. Patrick at General Motors and Owen Mock at North American Aviation.

In 1953 at the Eastern Joint Computer Conference several users of the IBM 701 met and discussed the need for some way to streamline the scheduling of computer programs. This led to the first operating system, a batch processing monitor program for the 701 written by programmers at the General Motors Research Center in 1955. The electrostatic memory of the 701 was not very reliable, so IBM brought out the core memory 704 as a replacement. The monitor was rewritten for the 704 as a joint effort by General Motors and North American Aviation in 1956. Their GM/NAA I/O System was further developed by the SHARE user group in 1960 as the SHARE Operating System (SOS). Finally, IBM took over support of it, and, with the name changed to IBSYS, it was widely used on the transistor (second-generation) IBM 7090 and 7094 computers. It was not, however, the only operating system for this family of computers. IBM had also developed a FORTRAN monitor system (FMS), Bell Labs wrote a monitor called BELLMON, and the University of Michigan Executive System (UMES) was used at many universities.
Unisys History Newsletter V1#3

In the History of UNIX, Ronda Hauben writes the following:

Describing the situation facing the (Bell) Labs, Victor Vyssotsky, who had been involved as the technical head of the Multics project at Bell Labs and later Executive Director of Research in the Information Systems Division of AT&T Bell Labs, explains, "We just couldn't take the time to get them on and off the machine manually. We needed an operating system to sequence jobs through and control machine resources." (from "Putting Unix in Perspective", Interview with Victor Vyssotsky, by Ned Pierce, in Unix Review, Jan. 1985, pg. 59)

The BESYS operating system was created at Bell Labs to deal with their in-house needs. When asked by others outside the Labs to make a copy available, they did so but with no obligation to provide support. "There was no support when we shipped a BESYS tape to somebody," Vyssotsky recalls. "We would answer reasonable questions over the telephone. If they found troubles or we found troubles, we would provide fixes." (Ibid., pg. 59)

By 1964, however, the Labs was adopting third generation computer equipment and had to decide whether they would build their own operating system or go with one that was built outside the Labs. Vyssotsky recounts the process of deliberation at the time, "Through a rather murky process of internal deliberation we decided to join forces with General Electric and MIT to create Multics," he explains. The Labs planned to use the Multics operating system "as a mainstay for Bell Laboratories internal service computing in precisely the way that we had used the BESYS operating system." (Ibid., pg. 59)
History of UNIX by Ronda Hauben

In the Unisys quote the OS is referred to as "BELLMON" and in Hauben's quote of the Unix Review article it is referred to as "BESYS". In any case, these "monitors" allowed jobs to be queued in card readers or on magnetic tape to increase the throughput of the workload. This is what became known as "batch processing".

What is interesting to note is the distribution of the Bell Labs OS to those outside the Labs. This pattern reasserts itself ten years later with the distribution of Unix.

Linux is a time-sharing OS.

The concept of time sharing is a big one. It has many important implications, without which the use of computers would never have grown beyond the need for a handful of machines for the entire world.

I think there is a world market for maybe five computers. (Thomas Watson, chairman of IBM, 1943)
Editor's Notes
The impetus for time-sharing first arose from professional programmers because of their constant frustration in debugging programs at batch processing installations. Thus, the original goal was to time-share computers to allow simultaneous access by several persons while giving to each of them the illusion of having the whole machine at his disposal.
Introduction and Overview of the Multics System 1965 Fall Joint Computer Conference

Well, that is a rather near-sighted and egotistical view written back in 1965. From the vantage point of the new millennium looking back on the fabric of history, the pattern that emerges is somewhat different.

A person who played a pivotal role was neither a computer scientist nor particularly interested in computers. Dr. J.C.R. Licklider was interested in psychoacoustics. Howard Rheingold wrote a very interesting chapter in "Tools for Thought" on Licklider and how his focus turned to computing as an aid to his primary interest, understanding human communications. There were others following parallel lines of inquiry (Doug Engelbart for example), but it is Licklider whom we follow here because of the position he was to occupy in the early 1960s.

In 1957 Licklider performed an informal time-motion study of his own activities as he went about his day as a research scientist. What he found was that he spent the vast majority of his time performing "clerical" tasks: putting things in files, taking things out of files, performing calculations, plotting graphs of data, and other numerical data management. All this work was preparation for what he saw as his primary job function, analysis - interpreting what all the information might mean.

After obtaining tenure at MIT, Licklider joined the engineering consulting firm of Bolt, Beranek & Newman. There he was able to directly interact with a PDP-1. However, he soon learned that even the new mini-computers then available were still far too crude to study neural models of pitch perception. Having had "a kind of religious conversion" to interactive computing, he wrote a book in 1959 called "Libraries of the Future."

Then, turning his attention more fully towards computer science and applying his formidable knowledge of human factors, he wrote the paper "Man-Computer Symbiosis" in 1960. Licklider also spent considerable time at MIT's Lincoln Laboratory in Lexington, Massachusetts working on top-secret defense research related to graphic displays.

"I got Jack (Ruina, Director of ARPA in the early 1960s) to see the pertinence of interactive computing, not only to military command and control, but to the whole world of day-to-day business," Licklider recalls. "So, in October, 1962 I moved into the Pentagon and became the director of the (newly created) Information Processing Techniques Office."

Upon his arrival at the Pentagon he was conducted to an office, not a lab. He was given a budget and a free hand to establish a new state of the art in computing technology. The original mandate of the office was to extend the research carried out into the computerization of air defense by the SAGE program to other military command and control systems. In particular, the IPTO was to build on SAGE's development of one of the first wide area computer networks for the cross-country radar defense system, and build a survivable electronic network to interconnect the key DoD sites at the Pentagon, Cheyenne Mountain, and SAC HQ.

Through the office of IPTO, Licklider funded research into advanced computer and network technologies, and commissioned thirteen research groups to perform research into technologies related to human computer interaction and distributed systems. Each group was given a budget thirty to forty times as large as a normal research grant, complete discretion as to its use, and access to state-of-the-art technology.

The players who were to dance to Licklider's purse strings at MIT began jostling for position during the late 1950s.

John McCarthy wrote a memo to his department head on January 1, 1959 (Memorandum to P.M. Morse Proposing Time Sharing). In the memo he addresses some important issues in implementing a time-sharing system.

In Reminiscences on the History of Time Sharing, John McCarthy gives his point of view on the development of time sharing. Since he left for Stanford in 1962, he does not play much of a role in the line of history we are concerned with here beyond his memo and his chairmanship of the working committee before his departure.

Fernando J. Corbato took the lead of the Compatible Time Sharing System (CTSS) project, first demonstrated in November of 1961 at the MIT Computation Center. A paper describing CTSS, An Experimental Time Sharing System, was presented at the 1962 Spring Joint Computer Conference. Tom Van Vleck, a student at MIT in the early 60s, has written The IBM 7094 and CTSS.

The CTSS project was used to bootstrap the Multics time-sharing operating system environment. In 1964 GE was selected as the hardware supplier and formally joined the project. At about this same time, Bell Labs was faced with either developing a new operating system for the new generation of transistorized computers or outsourcing it to a third party. In November of 1964 Bell Labs joined the Multics project with funding and personnel.

Critics charged that Multics was premature and over-reaching. There is some justification to these charges, as the project ran into a number of delaying snags while waiting for various technologies to be developed upon which to build. With the enormous funding being poured into it by IPTO, GE, and Bell Labs, the project bloated and floundered for a number of years. In April of 1969 Bell Labs formally withdrew from the project, citing cost overruns and lack of useful results.

Linux is a UNIX-like OS

Linux is a clone of the Unix kernel, written from scratch by Linus Torvalds with assistance from a loosely-knit team of hackers across the Net. It aims towards POSIX and Single UNIX Specification compliance. It has all the features you would expect in a modern fully-fledged Unix kernel, including true multitasking, virtual memory, shared libraries, demand loading, shared copy-on-write executables, proper memory management, and TCP/IP networking.
Linux project page on Freshmeat Patrick Lenz - Friday, February 2nd 2001 09:18 EST

"Linux is a clone of the Unix kernel . . . "
This, more than any other reason, is why this history is so important to the understanding of Linux. Only through the study of Unix history and culture can one hope to gain a helpful historical perspective on where Linux came from. Linux is the true descendant and continuation of the Unix culture that came into being during the 70s.

Every branch of engineering and design has technical cultures. In most kinds of engineering, the unwritten traditions of the field are parts of a working practitioner's education as important as (and, as experience grows, often more important than) the official handbooks and textbooks. Senior engineers develop huge bodies of implicit knowledge, which they pass to their juniors by (as Zen Buddhists put it) ``a special transmission, outside the scriptures''.

The Unix culture is one of these. The Internet culture is another -- or, as the millennium turns, perhaps the same one. The two have grown increasingly difficult to separate since the early 1980s, and in this book we won't try particularly hard.
The Art of Unix Programming by Eric S. Raymond (from his forthcoming book)

Ken Thompson and Dennis Ritchie were among those at Bell Labs who had been part of the Multics project and felt most sharply the loss of the direct interactive environment they had grown used to. They were not happy to be back at the Labs and facing a return to the batch processing methodology.

UNIX was never a "project", it was not designed to meet any specific need except that felt by its major author, Ken Thompson, and soon after its origin by the author of this paper, for a pleasant environment in which to write and use programs.
The UNIX Time-sharing System--A Retrospective by Dennis M. Ritchie, 1978
From the point of view of the group that was to be most involved in the beginnings of Unix (K. Thompson, Ritchie, M. D. McIlroy, J. F. Ossanna), the decline and fall of Multics had a directly felt effect. We were among the last Bell Laboratories holdouts actually working on Multics, so we still felt some sort of stake in its success. More important, the convenient interactive computing service that Multics had promised to the entire community was in fact available to our limited group, at first under the CTSS system used to develop Multics, and later under Multics itself. Even though Multics could not then support many users, it could support us, albeit at exorbitant cost. We didn't want to lose the pleasant niche we occupied, because no similar ones were available . . .
The Evolution of the Unix Time-sharing System by Dennis M. Ritchie

Like many hackers, Ken Thompson had a game he liked to play. Space Travel was a simulation of the solar system, the object being to land on the various planets and satellites. Space Travel led him to a little-used PDP-7.

Ken first did Space Travel on the GE 635, before it had a time-sharing facility, but it did have an "interactive batch" mechanism. A job would be submitted, you could type to it, and there were several locally-built displays attached as peripherals. It was expensive to run; a game would cost about $50. Of course it was internal "funny money."

More or less at the same time, Ken discovered the PDP-7, which had been fitted out with a nice vector display. The display was a joint Bell Labs and Digital Equipment design, built by a nearby department as an output facility for the (then) main IBM 7094 computer. This PDP-7 Graphics-II system was much neater than the 635's display, even if older. The PDP-7 was a real computer, not a peripheral, albeit small even by 1968 standards. By then, it was not used much and it was free even by funny-money standards. So Ken moved ST to it, and ran the game standalone. The program was written from scratch, all in assembly language, and even including a complete floating-point arithmetic simulator.

Also, about this time, Ken again got the urge to write his own operating system. He had started on such a project before, but on a much bigger machine -- the GE 645 Multics machine. It didn't take long to realize that he couldn't keep the machine.

Because Ken was now familiar with the '7 and knew he could use it as much as he wanted, the first version of Unix was written on this PDP-7. So ST came before Unix, but doing ST led him to a place in which he could write the first version of Unix.
The Space Travel Game by Dennis M. Ritchie

Space Travel, though it made a very attractive game, served mainly as an introduction to the clumsy technology of preparing programs for the PDP-7. Soon Thompson began implementing the paper file system (perhaps `chalk file system' would be more accurate)[note: prior to finding the PDP-7 "Thompson, R. H. Canaday, and Ritchie developed, on blackboards and scribbled notes, the basic design of a file system that was later to become the heart of Unix." ibid] that had been designed earlier. A file system without a way to exercise it is a sterile proposition, so he proceeded to flesh it out with the other requirements for a working operating system, in particular the notion of processes. Then came a small set of user-level utilities: the means to copy, print, delete, and edit files, and of course a simple command interpreter (shell). Up to this time all the programs were written using GECOS and files were transferred to the PDP-7 on paper tape; but once an assembler was completed the system was able to support itself. Although it was not until well into 1970 that Brian Kernighan suggested the name `Unix,' in a somewhat treacherous pun on `Multics,' the operating system we know today was born.
The Evolution of the Unix Time-sharing System
Thompson wanted to create a comfortable computing environment constructed according to his own design, using whatever means were available. His plans, it is evident in retrospect, incorporated many of the innovative aspects of Multics, including an explicit notion of a process as a locus of control, a tree-structured file system, a command interpreter as user-level program, simple representation of text files, and generalized access to devices. They excluded others, such as unified access to memory and to files. At the start, moreover, he and the rest of us deferred another pioneering (though not original) element of Multics, namely writing almost exclusively in a higher-level language. PL/I, the implementation language of Multics, was not much to our tastes, but we were also using other languages, including BCPL, and we regretted losing the advantages of writing programs in a language above the level of assembler, such as ease of writing and clarity of understanding. At the time we did not put much weight on portability; interest in this arose later.
The Development of the C Language by Dennis M. Ritchie
The PDP-7 Unix file system
Structurally, the file system of PDP-7 Unix was nearly identical to today's. It had
  1. An i-list: a linear array of i-nodes each describing a file. An i-node contained less than it does now, but the essential information was the same: the protection mode of the file, its type and size, and the list of physical blocks holding the contents.
  2. Directories: a special kind of file containing a sequence of names and the associated i-number.
  3. Special files describing devices. The device specification was not contained explicitly in the i-node, but was instead encoded in the number: specific i-numbers corresponded to specific files.

    The important file system calls were also present from the start. Read, write, open, creat (sic), close: with one very important exception, discussed below, they were similar to what one finds now. A minor difference was that the unit of I/O was the word, not the byte, because the PDP-7 was a word-addressed machine. In practice this meant merely that all programs dealing with character streams ignored null characters, because null was used to pad a file to an even number of characters. Another minor, occasionally annoying difference was the lack of erase and kill processing for terminals. Terminals, in effect, were always in raw mode. Only a few programs (notably the shell and the editor) bothered to implement erase-kill processing.

    In spite of its considerable similarity to the current file system, the PDP-7 file system was in one way remarkably different: there were no path names, and each file-name argument to the system was a simple name (without `/') taken relative to the current directory. Links, in the usual Unix sense, did exist. Together with an elaborate set of conventions, they were the principal means by which the lack of path names became acceptable.
    The Evolution of the Unix Time-sharing System
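
As a rough companion to Ritchie's description, here is a speculative sketch in C of what an early-style i-node and directory entry might look like. The field names and sizes are my own assumptions for illustration; they follow the description above (protection mode, type and size, list of physical blocks, and name-plus-i-number directory entries), not the actual PDP-7 source.

/* Speculative sketch of the structures described above; field names
 * and sizes are illustrative assumptions, not the real PDP-7 code. */
#include <stdint.h>

#define NBLOCKS 8   /* assumed number of direct block pointers per i-node */
#define NAMELEN 8   /* assumed maximum file-name length in a directory entry */

/* One entry in the i-list: everything the system knows about a file. */
struct inode {
    uint16_t mode;             /* protection mode and file type
                                  (plain, directory, or special) */
    uint16_t size;             /* file size, in words on a word-addressed machine */
    uint16_t blocks[NBLOCKS];  /* physical blocks holding the contents */
};

/* A directory is just a file containing a sequence of these:
 * a name and the associated i-number.  Device files need no extra
 * fields, because specific i-numbers correspond to specific devices. */
struct dirent {
    uint16_t inum;             /* i-number: index into the i-list */
    char     name[NAMELEN];    /* simple name, no '/': there were no path names */
};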

By `process control,' I mean the mechanisms by which processes are created and used; today the system calls fork, exec, wait, and exit implement these mechanisms. Unlike the file system, which existed in nearly its present form from the earliest days, the process control scheme underwent considerable mutation after PDP-7 Unix was already in use. (The introduction of path names in the PDP-11 system was certainly a considerable notational advance, but not a change in fundamental structure.)
The Evolution of the Unix Time-sharing System
Processes (independently executing entities) existed very early in PDP-7 Unix. There were in fact precisely two of them, one for each of the two terminals attached to the machine. There was no fork, wait, or exec. There was an exit, but its meaning was rather different, as will be seen. The main loop of the shell went as follows.
  1. The shell closed all its open files, then opened the terminal special file for standard input and output (file descriptors 0 and 1).
  2. It read a command line from the terminal.
  3. It linked to the file specifying the command, opened the file, and removed the link. Then it copied a small bootstrap program to the top of memory and jumped to it; this bootstrap program read in the file over the shell code, then jumped to the first location of the command (in effect an exec).
  4. The command did its work, then terminated by calling exit. The exit call caused the system to read in a fresh copy of the shell over the terminated command, then to jump to its start (and thus in effect to go to step 1).

The most interesting thing about this primitive implementation is the degree to which it anticipated themes developed more fully later. True, it could support neither background processes nor shell command files (let alone pipes and filters); but IO redirection (via `<' and `>') was soon there; it is discussed below. The implementation of redirection was quite straightforward; in step 3) above the shell just replaced its standard input or output with the appropriate file. Crucial to subsequent development was the implementation of the shell as a user-level program stored in a file, rather than a part of the operating system.
The Evolution of the Unix Time-sharing System

As noted in the last paragraph, separating the shell from the OS laid the groundwork for multiple and alternative shell developments, which has proven to be one of Unix's great strengths. Of even greater importance is the concept of independent processes as the focus of control.
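
To make the contrast concrete, below is a minimal sketch of a shell main loop written with today's fork, exec, wait, and exit calls. It is a modern analogue offered only for comparison, not the PDP-7 code: where this sketch forks a child and waits, the PDP-7 shell instead overwrote itself with the command and was reloaded from disk when the command exited.

/* Minimal modern analogue of the shell loop described above.
 * fork/exec/wait did not exist on the PDP-7; there the command
 * overwrote the shell image and exit() read the shell back in.
 * For simplicity this sketch handles single-word commands only. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];

    for (;;) {
        fputs("@ ", stdout);                      /* prompt */
        fflush(stdout);
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                                /* end of input */
        line[strcspn(line, "\n")] = '\0';
        if (line[0] == '\0')
            continue;

        pid_t pid = fork();                       /* the step the PDP-7 shell skipped */
        if (pid == 0) {
            /* child: become the command, as the bootstrap overlay once did */
            char *argv[] = { line, NULL };
            execvp(line, argv);
            perror(line);
            _exit(1);
        }
        waitpid(pid, NULL, 0);                    /* parent: wait, then loop to step 1 */
    }
    return 0;
}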

The very convenient notation for IO redirection, using the `<' and `>' characters, was not present from the very beginning of the PDP-7 Unix system, but it did appear quite early. Like much else in Unix, it was inspired by an idea from Multics. Multics has a rather general IO redirection mechanism [3] embodying named IO streams that can be dynamically redirected to various devices, files, and even through special stream-processing modules. Even in the version of Multics we were familiar with a decade ago, there existed a command that switched subsequent output normally destined for the terminal to a file, and another command to reattach output to the terminal. Where under Unix one might say

ls >xx

to get a listing of the names of one's files in xx, on Multics the notation was

iocall attach user_output file xx
list
iocall attach user_output syn user_i/o

Even though this very clumsy sequence was used often during the Multics days, and would have been utterly straightforward to integrate into the Multics shell, the idea did not occur to us or anyone else at the time. I speculate that the reason it did not was the sheer size of the Multics project: the implementors of the IO system were at Bell Labs in Murray Hill, while the shell was done at MIT. We didn't consider making changes to the shell (it was their program); correspondingly, the keepers of the shell may not even have known of the usefulness, albeit clumsiness, of iocall. (The 1969 Multics manual [4] lists iocall as an `author-maintained,' that is non-standard, command.) Because both the Unix IO system and its shell were under the exclusive control of Thompson, when the right idea finally surfaced, it was a matter of an hour or so to implement it.
The Evolution of the Unix Time-sharing System
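
The mechanism Ritchie alludes to, the shell simply replacing its standard output with the appropriate file before running the command, maps directly onto a few system calls today. The fragment below is a hedged sketch of roughly what "ls >xx" amounts to in modern terms; it is an illustration, not the historical implementation.

/* Sketch: roughly what a shell does for "ls >xx" in modern terms.
 * Open the file, make it file descriptor 1, then run the command. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        int fd = open("xx", O_WRONLY | O_CREAT | O_TRUNC, 0666);
        if (fd < 0) {
            perror("xx");
            _exit(1);
        }
        dup2(fd, STDOUT_FILENO);   /* replace standard output with the file */
        close(fd);
        execlp("ls", "ls", (char *)NULL);
        perror("ls");              /* only reached if exec fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);
    return 0;
}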

The appropriate scale of "chunking" plays an important role in modern Open Source philosophy. The optimal package size is one where the maintainer is able to comprehend the whole of it. Thus, when a change is needed, the maintainer is able to quickly pinpoint where in the code to make the alteration.

Attached by magnet to the wall of my office is a yellowed sheet of paper, evidently the tenth page of an internal Bell Labs memo by Doug McIlroy. Unfortunately, I don't have the rest of the note. (...memo written, in 1964) . . .

1. We should have some ways of connecting programs like garden hose--screw in another segment when it becomes necessary to massage data in another way. This is the way of IO also. (ibid)

Point 1's garden hose connection analogy, though, is the one that ultimately whacked us on the head to best effect. (ibid)
Prophetic Petroglyphs

One of the most widely admired contributions of Unix to the culture of operating systems and command languages is the pipe, as used in a pipeline of commands.

Pipes appeared in Unix in 1972, well after the PDP-11 version of the system was in operation, at the suggestion (or perhaps insistence) of M. D. McIlroy, a long-time advocate of the non-hierarchical control flow that characterizes coroutines. (ibid)

. . . thanks to McIlroy's persistence, pipes were finally installed in the operating system (a relatively simple job), and a new notation was introduced. (ibid)

The new facility was enthusiastically received, and the term `filter' was soon coined. Many commands were changed to make them usable in pipelines. For example, no one had imagined that anyone would want the sort or pr utility to sort or print its standard input if given no explicit arguments. (ibid)

The pipe notation using `<' and `>' survived only a couple of months; it was replaced by the present one that uses a unique operator to separate components of a pipeline. (ibid)
Pipes Dennis Ritchie, The Evolution of the Unix Time-sharing System
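
In today's kernels the idea survives essentially unchanged: a pipe is a pair of connected file descriptors, and the shell wires one command's standard output to the next command's standard input. The sketch below shows roughly what a pipeline such as "ls | sort" involves; it is a simplified illustration of the concept, not the 1972 implementation.

/* Sketch of what a shell does for "ls | sort": create a pipe,
 * then connect ls's stdout and sort's stdin to its two ends. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) < 0) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {             /* first child: ls writes into the pipe */
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("ls");
        _exit(1);
    }

    if (fork() == 0) {             /* second child: sort reads from the pipe */
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]);
        close(fd[1]);
        execlp("sort", "sort", (char *)NULL);
        perror("sort");
        _exit(1);
    }

    close(fd[0]);                  /* parent closes both ends and waits */
    close(fd[1]);
    while (wait(NULL) > 0)
        ;
    return 0;
}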

Every program for the original PDP-7 Unix system was written in assembly language, and bare assembly language it was--for example, there were no macros. Moreover, there was no loader or link-editor, so every program had to be complete in itself. The first interesting language to appear was a version of McClure's TMG that was implemented by McIlroy. Soon after TMG became available, Thompson decided that we could not pretend to offer a real computing service without Fortran, so he sat down to write a Fortran in TMG. As I recall, the intent to handle Fortran lasted about a week. What he produced instead was a definition of and a compiler for the new language B. B was much influenced by the BCPL language; other influences were Thompson's taste for spartan syntax, and the very small space into which the compiler had to fit. The compiler produced simple interpretive code; although it and the programs it produced were rather slow, it made life much more pleasant. Once interfaces to the regular system calls were made available, we began once again to enjoy the benefits of using a reasonable language to write what are usually called `systems programs:' compilers, assemblers, and the like. (Although some might consider the PL/I we used under Multics unreasonable, it was much better than assembly language.) Among other programs, the PDP-7 B cross-compiler for the PDP-11 was written in B, and in the course of time, the B compiler for the PDP-7 itself was transliterated from TMG into B.

When the PDP-11 arrived, B was moved to it almost immediately. In fact, a version of the multi-precision `desk calculator' program dc was one of the earliest programs to run on the PDP-11, well before the disk arrived. However, B did not take over instantly. Only passing thought was given to rewriting the operating system in B rather than assembler, and the same was true of most of the utilities. Even the assembler was rewritten in assembler. This approach was taken mainly because of the slowness of the interpretive code. Of smaller but still real importance was the mismatch of the word-oriented B language with the byte-addressed PDP-11.

Thus, in 1971, work began on what was to become the C language . . . Perhaps the most important watershed occurred during 1973, when the operating system kernel was rewritten in C. It was at this point that the system assumed its modern form; the most far-reaching change was the introduction of multi-programming. There were few externally-visible changes, but the internal structure of the system became much more rational and general. The success of this effort convinced us that C was useful as a nearly universal tool for systems programming, instead of just a toy for simple applications.
The Evolution of the Unix Time-sharing System



Dennis Ritchie wrote a paper, Early Unix History and Evolution, which provides deep insight into the situation surrounding the development of UNIX, including a slightly expanded retelling of Ken Thompson's Space Travel story. Right on the heels of the first Unix kernel came the beginnings of the C system programming language (See C History). The first Unix Programmer's Manual, dated November 3, 1971, can be found here.

As work continued on the Bell Labs operating system, the researchers developed a set of principles to guide their work. Among these principles were:
(i) Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.

(ii) Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.

(iii) Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.

(iv) Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.

(from M.D. McIlroy, E.N. Pinson, and B.A. Tague, "Unix Time-Sharing System: Foreword", The Bell System Technical Journal, July-Aug 1978, vol. 57, number 6, part 2, pg. 1902)


History of Unix by Ronda Hauben
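
A small illustration of principles (i) and (ii): the classic Unix filter does one job, reads its standard input when given nothing else, and writes only its result to standard output, so it slots naturally into a pipeline. The toy program below, a line counter I will call countlines (a hypothetical name, not a historical tool), is my own sketch in that spirit.

/* A toy filter in the spirit of principles (i) and (ii):
 * do one small thing (count lines), read standard input,
 * and print nothing but the answer so other programs can use it. */
#include <stdio.h>

int main(void)
{
    long lines = 0;
    int c;

    while ((c = getchar()) != EOF)
        if (c == '\n')
            lines++;

    printf("%ld\n", lines);
    return 0;
}

Run as, say, "ls | countlines", it composes with other tools exactly as principle (ii) anticipates, and it stays small enough for one person to hold in their head, which is the point of principle (i).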

The computer workers at Bell Labs had real-world problems to solve. They developed Unix as a tool to solve those problems. A philosophy of utility was instilled into the design and emerging culture of Unix.

Linux is released under the GPL

Linux is nearly hardware independent.
The Linux Edge


Certainly the colossal and repeated blunders of AT&T, Sun, Novell, and other commercial vendors and standards consortia in mis-positioning and mis-marketing Unix have become legendary.
The Art of Unix Programming

It was the very attempt to make Unix a proprietary product that broke it. At its very roots Unix was developed not just as a tool for the lone programmer but for the community environment it created. Closing its source violates this integral element of "community".