Unix/Linux

John O’Gorman

john@og.co.nz

18 May 2017

1 Intro

This documents my history of involvement with Unix, then Linux, then Mac OS X.

2 Epoch

In the 1960s Bell Labs (the research division of AT&T, based in New Jersey) had participated with a consortium on a project called Multics (Multiplexed Information and Computing Service). Bell Labs withdrew and the consortium foundered under its own weight. Ken Thompson and Dennis Ritchie resolved to build a simple, elegant operating system, which they called Unix (a pun on Multics: Uniplexed rather than Multiplexed). Bell released it to universities and US military institutions and started a revolution in computing. The 1st of January 1970 was declared the Epoch, and Unix time has been counted in seconds from that date ever since. Bell Labs employed a large number of geniuses who, once established, were allowed to work on whatever they wished.
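As a small aside on how that works in practice, most Unix systems will show you the raw count of seconds and the Epoch itself from the command line (the second command relies on the GNU version of date, so it may differ on other systems):

    # print the current time as seconds since the Epoch
    date +%s

    # print what second zero corresponds to (GNU date)
    date -u -d @0
    # prints: Thu Jan  1 00:00:00 UTC 1970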
The first effect of the nearly free distribution of Unix to universities was that non-computer specialists started to use computers. At Stanford University, for example, a classics lecturer wrote a program to parse and compile Pascal programs. Within ten years the internet had been created, built largely on Unix computers (mostly DEC PDP-11 minicomputers).

3 Design

The concepts that give Unix its character are a hierarchical file system in which everything, including devices, is presented as a file; small programs that each do one thing well; pipes for combining those programs into larger tools; plain text as the common interchange format; and a multi-user, multi-tasking kernel.

4 History

The initial Unix kernel was written in PDP-11 assembly language, but in 1972 Dennis Ritchie rewrote the kernel in C (a language developed in-house at Bell Labs as a successor to B). This made Unix easy to port to other CPU architectures: only a few hundred lines had to be rewritten for a particular CPU, and the C compiler looked after the rest of the port. Manufacturers soon came out with their own Unix variants: Hewlett-Packard's HP-UX, IBM's AIX, Solaris from Sun (a commercial spinoff from the Stanford University Network), and BSD from the University of California, Berkeley (UCB). We at O'Gorman Computer Consultants worked on all these machines.
While this was going on, people at PARC (Palo Alto Research Centre), a division of Xerox Corporation, were working on, amongst other projects, experiments with a graphical user interface (GUI). This is where the three-button mouse, graphics tablets, window managers, and all the components we see today got their first implementation. The work was taken up by Unix people at the Massachusetts Institute of Technology (MIT), who created the X Window System (a successor to an earlier system called W). The project spun off into X.org and is the basis of all Unix GUI systems.
In 1983 Richard Stallman, also at MIT, started the GNU (GNU's Not Unix) project, a ground-up redevelopment of all the standard Unix programs under a free, open-source licence. He was still working on the GNU Hurd version of the Unix kernel when, in 1991, Linus Torvalds, a student at the University of Helsinki in Finland, wrote a free, open-source kernel which got named Linux. The Linux kernel combined with the GNU programs should properly be called GNU/Linux but, to Richard Stallman's annoyance, it is nearly always called just Linux. I met Richard when he attended a Uniforum congress I helped organise in Rotorua.
IBM ensured that Unix or Linux ran on all their computers, from mainframes and minicomputers to microcomputers, down to their ThinkPad notebooks (a line later sold off to Lenovo in China).

5 Hardware

5.1 Terminals

In the beginning you worked at a terminal which connected to a serial port on the back of the computer. Manufacturers had been making these before the advent of Unix, and common brands were the VT100 from Digital Equipment Corporation (DEC) and Wyse. You had to set up both the terminal and the Unix port to agree on all the configuration values: the speed (called the baud rate), such as 9600; 7 or 8 bit data characters; 1 or 2 stop bits; and even parity or none (commonly written 8-1-none or 7-2-even). Parity was an extra bit set so that the number of on bits was always even; it detected corruption by electrical or magnetic interference, which would result in the character being resent. The Unix command stty was used to set up the computer's port, while the terminal itself had a setup interface which allowed you to adjust its own settings. Terminals displayed an array of characters, typically 80 or 132 columns by 24 or 25 lines.
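As a rough illustration (not part of the original setup described above), this is how a serial line might be configured with the GNU version of stty found on Linux; the device name /dev/ttyS0 and the chosen settings are only assumptions for the example:

    # show the current settings of the first serial port
    stty -a -F /dev/ttyS0

    # set 9600 baud, 8 data bits, 1 stop bit, no parity (8-1-none)
    stty -F /dev/ttyS0 9600 cs8 -cstopb -parenb

Older Unix systems lack the -F option, and the port is instead redirected onto standard input, for example: stty 9600 cs8 -cstopb -parenb < /dev/ttyS0.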

5.2 Desktop Computers

Later, when desktop computers such as the IBM PC came into popularity, the terminal was replaced by a monitor which had graphical capabilities. To run shell commands you needed a terminal emulator; several came into being, with xterm (from X.org) becoming the most popular.
When businesses, particularly local bodies, took up Unix they gradually started to replace serial terminals with desktop computers. This led to management problems with key files scattered among desktop computers and servers. The next important development was an attempt to fix this.

5.3 Monitors

Monitors were different from terminals in that, instead of presenting arrays of characters such as 80 x 25, they presented arrays of dots (pixels), typically 1024 x 768, and each dot could be one of 256 colours (so a 1024 x 768 screen at one byte per pixel needs 768 KB of video memory). These standards increased over the years as technology improved and memory got cheaper.

5.4 Xdisplays

These had originated at PARC (Palo Alto Research Centre) but were standardised by the X.org consortium and provided a GUI interface to remote computers. The usual client-server nomenclature was inverted in this case: the front-end X terminal was labelled the X server, and the remote programs it ran were the clients. The X displays were often called workstations. They depended on an area of memory called a frame buffer, either in the computer or on a special graphics card. The frame buffer mapped onto the display: processes wrote into the frame buffer and the hardware looked after transferring the changes to the screen. PARC also created a new idiom in computer programming. Programs became event-driven, the events being mouse clicks, drags, drag-and-drops, mouse moves, keystrokes, and so on. Programs changed to being finite state automata (FSAs), the typical design being a main function that set up its objects and then entered a loop waiting for events, responding to them by sending messages to the relevant objects and altering the current state of the automaton.
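A minimal sketch of the inverted naming in practice, assuming a workstation called ws1 running the X server and a remote machine called bigbox running the clients (both host names are made up for the example):

    # on the workstation ws1: allow the remote host to draw on our X server
    # (xhost access control is crude; shown only to illustrate the idea)
    xhost +bigbox

    # log in to the remote machine and start a client program there,
    # telling it to display back on ws1's screen
    ssh bigbox
    export DISPLAY=ws1:0
    xterm &

More recent practice is to let ssh tunnel the X protocol instead, with ssh -X bigbox, which sets DISPLAY on the remote machine automatically.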

5.5 Thin Clients

A thin client was a diskless computer (now called a workstation) which, at boot-up time, received its configuration from a central computer and became an X server: the user was logged in on a remote computer but had all his work displayed locally on the workstation. The boot-up process was based on a project called Etherboot, which allowed you to download code that could be installed in the read-only memory (ROM) of Ethernet cards. The Etherboot project was started by Jamie Honan of Sydney and taken over by Ken Yap in 1995. I met both Jamie and Ken in Sydney, and later met Ken again when he came to another Uniforum conference in NZ. The project evolved and is now called LTSP (Linux Terminal Server Project), and it has been given the epithet Kiwi (which refers to the fruit, not the bird!) for combining with a graphic imaging system called Kiwi.

5.6 VNC

Virtual Network Computing (VNC) was a project developed in Cambridge, England. It is free and open-source, available under the GNU licence. It allows any computer to act as an X server connected to any other computer on a network, and it is good for sites where (regrettably) the users have Microsoft machines on their desktops but need to interact with serious remote computers. There are three components to VNC: a server called Xvnc which runs on the Unix/Linux box, a viewer program called vncviewer which runs on the desktop, and a password program called vncpasswd which allows you to set and/or change your password. The system works in terms of a frame buffer: the server keeps the remote screen as a virtual frame buffer in memory, and vncviewer displays it on the desktop's screen.
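A minimal sketch of the three components in use, assuming a TigerVNC-style installation on the Unix box (the display number, geometry, and host name are made up for the example, and exact options differ between VNC distributions):

    # on the Unix/Linux box: set a password, then start a virtual display :1
    vncpasswd
    Xvnc :1 -geometry 1024x768 -depth 16 &

    # start a client program on that virtual display
    DISPLAY=:1 xterm &

    # on the user's desktop machine: connect to display :1 on the server
    vncviewer serverhost:1

Most installations also provide a vncserver wrapper script that starts Xvnc and a session start-up script for you.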

5.7 Xen

Another project which began life in Cambridge, England was Xen, a Unix implementation of IBM's virtual machine architecture. Xen allows multiple instances of hardware to be simulated on a single computer or on a collection of them; each VM is a software simulation of a physical machine. This sort of software has become the basis of the so-called Cloud. It has resulted in servers being removed from company offices to specialist firms which run the VMs on behalf of their clients. Regrettably these firms are often overseas, and where that is so NZ relinquishes control, and therefore the security, of its data.
The Xen approach was inspired by the virtual machine operating systems IBM ran on its own mainframes last century, such as VM and MVS (Multiple Virtual Storage). With the cost of computing, memory, and other electronics rocketing downwards, microcomputers now have terabytes of storage and enough memory to reliably support the equivalent of dozens or even hundreds of customer machines.
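As a rough sketch of how an administrator drives Xen today, using its xl toolstack (the guest name and configuration path below are assumptions for the example; older installations used the xm command instead):

    # list the virtual machines running on this Xen host
    xl list

    # create (boot) a guest from its configuration file, then attach to its console
    xl create /etc/xen/guest1.cfg
    xl console guest1

    # shut the guest down cleanly
    xl shutdown guest1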

5.8 Nagios

The advent of cloud computing with remote virtual machines has meant that users are no longer in touch with the workings of their machines. This has led to the development of software that sends alerts, alarms, and warnings when things are about to go wrong or, worse, have gone wrong. The one we have expertise in is Nagios, which can send its notifications to us or to the customer. We have described Nagios elsewhere on our website.
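Under the hood, Nagios periodically runs small plugin programs; each prints a one-line status and signals its result through its exit code (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN). The sketch below is a made-up example of such a plugin, with thresholds and the chosen filesystem purely illustrative:

    #!/bin/sh
    # check_root_disk - warn at 80% full, go critical at 90% full
    used=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

    if [ "$used" -ge 90 ]; then
        echo "CRITICAL - root filesystem ${used}% full"
        exit 2
    elif [ "$used" -ge 80 ]; then
        echo "WARNING - root filesystem ${used}% full"
        exit 1
    fi
    echo "OK - root filesystem ${used}% full"
    exit 0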