Lexikon's History of Computing

A Brief History of the Microprocessor


This article is copyrighted by Richard Birkby, reprinted here with permission.

© Richard Birkby 1995

Abstract

The following article describes the evolution of the microprocessor, focusing on the technology, the companies and the people behind the invention. It begins with a brief history before the Intel 4004, then describes the design of that chip. It follows the microprocessor through its later iterations to RISC, parallel processing and today's super-RISC designs. (Footnoted references are indicated by a + sign.)

Background

The first Integrated Circuit (IC) was invented in late 1958 by Jack S. Kilby, working for Texas Instruments (+1). The company was an innovative manufacturer of transistors, and Kilby's first job with the company was designing micromodules (+2) for the military. This involved connecting many germanium wafers of discrete components together by stacking the wafers on top of one another.

Connections were made by running wires up the sides of the wafers. Kilby saw this process as unnecessarily complicated. He realized that if a piece of germanium was engineered properly, it could act as many components simultaneously. Thus the first IC was born. A year later (+3), Fairchild Semiconductor (founded in 1957), a division of Fairchild Camera & Instrument Corporation, invented the modern silicon diffusion, or planar, process which is still used today. The IC process gradually evolved over the next ten years, including the move to computer-aided design in 1967 (+4).

In June 1968, Robert Noyce (who had helped in developing Fairchild's silicon process), Gordon Moore and Andrew Grove resigned from Fairchild and founded their own company. Intel (short for Integrated Electronics) was born. The departure of the three men was significant not least because Robert Noyce was a co-founder and vice president of Fairchild.

The reason behind the departure was the skepticism of Fairchild's managers towards the future of the integrated circuit. Engineers in Fairchild's semiconductor subsidiary resented the managers, as they felt the invention had a great deal to offer.

The Reasons Behind Producing an Integrated CPU

Busicom, a trading name of a now defunct Japanese company called ETI, was planning a range of next-generation programmable calculators. Busicom had designed 12 chips and asked Intel to produce them, as a result of Intel's expertise with high transistor count chips. Marcian Hoff Jr. was assigned to the project and, after studying the designs, concluded that their complexity far exceeded what a calculator needed. Hoff was able to contrast the design with the DEC Programmed Data Processor (PDP-8). The PDP-8 had a relatively simple architecture, yet could perform very high level operations. Hoff realized a general-purpose processor would be a simpler design, yet able to handle a greater number of tasks. The MCS-4 chipset and, in particular, the 4004 integrated CPU were thus conceived. In 1969 Busicom chose Intel's "Microcomputer on a chip" (+5) (the word microprocessor wasn't used until 1972) (+6) over its own 12-chip design. Busicom's contract with Intel stated that Busicom had exclusive rights to buy the new chip set (CPU, ROM, RAM, I/O); however, in 1971 Intel agreed with Busicom that, in exchange for cheaper chip prices, Intel would have full marketing rights, enabling it to sell the chips to anyone who wanted to buy them.

Intel CPU Design to the 8086

In late 1969, after the 4004 instruction set had been defined, Computer Terminals Corporation (CTC) asked Intel to develop an LSI register chip for a new intelligent terminal it was developing. Given the experience with the 4004 and the furious pace of development within the industry, Stan Mazor (who had helped on the 4004) and Hoff agreed that they would put the complete processor on one chip. The 8008, an 8-bit processor, was defined and work began immediately. The chip was rejected by CTC as it required many support chips (a minimum of 20 TTL packages for memory and I/O) and was too slow. Chip design continued in parallel with the MCS-4, and in April 1972 the 45-instruction CPU was launched.

The chip became a great success. Intel looked at the CTC rejection of the 8008 and realized they had to make a general-purpose processor requiring only a handful of support chips. The Intel 8080 shipped in April 1974, although it had been announced earlier; Intel did this to give customers sufficient lead time to design the chip into their products. The 8080 had 4,500 transistors, twice the number in the 4004, and could address 64K bytes of memory. Its speed was mainly down to the use of electron-doped (NMOS) transistors as opposed to the hole-doped (PMOS) MOS transistors of earlier chips. The chip was an astounding success and became an industry standard, emulated by other companies.

In 1978, Intel produced its first 16-bit processor, the 8086. It was source compatible with the 8080 and 8085 (an 8080 derivative). This chip has probably had more effect on the present-day computer market than any other, although whether this is justified is debatable; the chip had to remain compatible with the four-year-old 8080, and this meant it had to use a most unusual overlapping segment register scheme to access a full 1 megabyte of memory.
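To illustrate that segmented scheme, here is a minimal sketch in C (the function name and example values are illustrative, not taken from any Intel documentation): the 8086 forms a 20-bit physical address by shifting a 16-bit segment value left four bits and adding a 16-bit offset, so many different segment:offset pairs can refer to the same byte.

    #include <stdio.h>
    #include <stdint.h>

    /* 8086 real-mode address calculation: physical = segment * 16 + offset.
       Two 16-bit values combine into a 20-bit (1 megabyte) address space. */
    static uint32_t physical_address(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* Two different segment:offset pairs that reach the same byte. */
        printf("%05X\n", (unsigned)physical_address(0x1234, 0x0010)); /* 12350 */
        printf("%05X\n", (unsigned)physical_address(0x1235, 0x0000)); /* 12350 */
        return 0;
    }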

The Early Years: Not Just Intel

Although Intel had invented the microprocessor and had grown from a three-man start-up in 1968 to an industrial giant by 1981, with 20,000 employees and revenues of $188 million (+7), it was not the only company developing microprocessors. By July 1974, 19 microprocessors were either available or announced (+8). By 1975 that number had increased to 40, and by 1976 it was 54. Late 1972 saw the second-ever microprocessor, Rockwell's PPS-4, a 4-bit design. Another 4-bit processor, the Texas Instruments TMS 1000, was introduced on the market in 1974, although it had been designed in 1971. This was around the same time as Intel's 4004, but TI failed to realize the potential, and left the TMS 1000 to spend its first three years controlling a Texas Instruments calculator. Surprisingly, the TMS 1000 was also the first microcontroller, as it contained its own RAM and ROM on chip.

By the late 1970s, the price of the chip had fallen to a few dollars, and it had become the processor of choice for consumer electronics. It was being produced in over forty variants and sold in the hundreds of millions. The staggering development in the field was also exemplified in 1974 by the National Semiconductor Processing And Control Element (PACE). National was a Fairchild offshoot and thus had a large skills base. Unfortunately, the chip was designed using hole-doped (PMOS) transistors, which left it at roughly a third of the speed it would have reached had it been designed using electron doping.

Clones

Due to the success of the Intel 8008, Zilog and Motorola produced competing chips. Motorola realized the potential of the microprocessor after seeing the 8008. In mid-1974 they launched the 6800, a processor aimed at the same market as the 8080. Motorola launched the 6800 with a wide variety of support in the way of system-oriented hardware. This integration proved a major factor in the popularity of the 6800, as it did not have Intel compatibility to fall back on. Popular as the chip was, its success fell well short of that of a derivative designed by a group of engineers who left Motorola to begin their own company: MOS Technology delivered their 6800-influenced processor, the 6502, in 1975.

The 6502 was successful because of its simple design, single power supply line and low cost. It became a favorite of the emerging small home computer businesses, including Apple, Commodore and Acorn. Being a simple yet powerful design, it was able to hold its own against the later and more powerful Z80. As a result, it had an influence on the concept of Reduced Instruction Set Computing (RISC), and especially on the Acorn ARM processor.

The Zilog chip, the Z80, was significant in that it was compatible with the 8080 yet added 80 more instructions. This compatibility was not unexpected, as Zilog was founded by engineers who had left Intel. Two of those engineers were Federico Faggin and Masatoshi Shima, who had designed the 4004 and 8080 for Intel. The Zilog (an acronym in which the Z stands for "the last word," the "i" for integrated and "log" for logic) Z80 was a very powerful processor, including on-chip dynamic memory refresh circuits. This enabled system designers such as Sir Clive Sinclair (+9) to produce computers with very little extra circuitry and hence at very little cost.

A year after Intel produced its first 16-bit processor, Motorola introduced another influential and long-lived chip, the 68000. It was able to address a massive 16 megabytes and, through intricate internal circuitry, behaved internally like a 32-bit processor. The chip found fame in the Macintosh, Amiga and Atari personal computers.

A New Philosophy - RISC

Most commentators see RISC as a modern concept, more akin to the 1990s, yet it can be traced to 1965 and Seymour Cray's CDC (Control Data Corporation) 6600. RISC design emphasizes simplicity of the processor instruction set, enabling sophisticated architectural techniques to be employed to increase the speed of those instructions. A classic example is the VAX architecture, where the INDEX instruction ran 45% to 60% faster when replaced by a sequence of simpler VAX instructions. The CDC 6600 had many RISC features, including a small instruction set of only 64 opcodes, a load/store architecture and register-to-register operations. Also, instructions were not of variable length, but a fixed 15 or 30 bits long.

Although the term RISC was not used, IBM formalized these principles in the IBM 801 (1975), an Emitter Coupled Logic (ECL) multichip processor. The architecture featured a small instruction set, load/store memory operations only, 24 registers and pipelining (+10).

When RISC became popular in the late eighties, IBM tried to market the design as the Research OPD (Office Products Division) Mini Processor (ROMP) CPU, but it wasn't successful. The chip eventually became the heart of the I/O processor for the IBM 3090. The term RISC itself came from one of two university research projects in the USA. The Berkeley RISC I formed the basis for the commercial Scalable (formerly Sun) Processor Architecture (SPARC) processor, whilst Stanford University's Microprocessor without Interlocked Pipeline Stages (MIPS) processor (+11) was commercialized and is now owned by Silicon Graphics Inc. The Berkeley RISC I was begun by David A. Patterson and his colleagues in 1980. He had returned from a sabbatical at Digital Equipment Corporation in 1979 and had been contemplating the difficulties surrounding the design of a CPU implementing the VAX architecture. He submitted a paper to Computer on the subject, but it was rejected on the grounds of a poor use of silicon. The rejection made Patterson wonder what a good use of silicon was. This led him "down the RISC path" (+12).

The RISC I, RISC II and SPARC families are unusual in that they feature register windows, a concept in which the CPU has only a few registers visible to the programmer at any time, but that set can be exchanged for another set, or window, when the program chooses. This was intended to provide a very low subroutine overhead by facilitating fast context switches. It was later acknowledged that a clever compiler can produce code for non-windowed machines that is nearly as efficient as that for a windowed processor. Windowing is difficult to implement on a processor, so the concept did not become popular on other architectures. Around the mid-eighties, the term RISC became something of a buzzword. Intel applied the term to its 80486 processor, although it was clearly nothing of the sort. Steve Przybylski, a designer on the Stanford MIPS project, satirized this in his definition of RISC: 'RISC: any computer announced after 1985' (+13).
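The windowing idea can be sketched in a few lines of C (a simplified model with made-up window sizes, not SPARC's actual register layout, and without the overlap real SPARC windows use to pass arguments between caller and callee): a large physical register file is divided into windows, and a subroutine call simply advances a current-window pointer instead of saving registers to memory.

    #include <stdio.h>

    /* Illustrative sizes only; real windowed machines differ. */
    #define NUM_WINDOWS  8
    #define REGS_PER_WIN 8
    #define TOTAL_REGS   (NUM_WINDOWS * REGS_PER_WIN)

    static int reg_file[TOTAL_REGS];   /* the large physical register file */
    static int cwp = 0;                /* current window pointer */

    /* Programmer-visible register r within the current window. */
    static int *reg(int r)
    {
        return &reg_file[cwp * REGS_PER_WIN + r];
    }

    /* A call advances the window instead of spilling registers to memory;
       a return moves it back, restoring the caller's registers instantly. */
    static void call_subroutine(void)   { cwp = (cwp + 1) % NUM_WINDOWS; }
    static void return_subroutine(void) { cwp = (cwp + NUM_WINDOWS - 1) % NUM_WINDOWS; }

    int main(void)
    {
        *reg(0) = 42;                 /* caller's register 0 */
        call_subroutine();
        *reg(0) = 7;                  /* callee writes a different physical register */
        return_subroutine();
        printf("%d\n", *reg(0));      /* prints 42: nothing was saved or restored */
        return 0;
    }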

Around the time the results of the Stanford and Berkeley projects were released, a small UK home computer firm, Acorn, was looking for a processor to replace the 6502 used in its existing line of computers. Its review of commercial microprocessors, including the popular 8086 and 68000, concluded that they were not advanced enough, so in 1983 the company began its own project to design a RISC microprocessor. The result, the ARM (Advanced RISC Machine, formerly Acorn RISC Machine), is probably the closest to true RISC of any processor available.

Parallelism - The Transputer

In 1979, Inmos was formed by the British government to produce innovative silicon-based products competing on the world stage. The formation was partly a response to the increasing dominance of the market by the USA and the need to provide the UK with manufacturing facilities. During the summer of 1980, Inmos was working on its first microprocessor, but progress was not smooth: two engineers held inflexible positions over their ideas of the architecture for this microprocessor. They were David May, who had been recruited from Warwick University, and Robert Milne, who had come from Scicon, a specialist company producing complex computer programs. Milne felt that the Transputer, the name given to the Inmos chip, should be the first chip in the world specially tailored to run Ada. He felt this was the future of microprocessor design, in strict contrast to May and Tony Hoare. Hoare was an academic guru from Warwick, where he had worked with May, and shared May's simple approach to the Transputer design. Iann Barron, who had been the driving force behind Inmos, became tired of the wrangling and forced his view on the team. His views happened to encompass those of May, but he also envisaged a multiplicity of individual processors all working concurrently (+14). The Transputer came to market in 1985 as the T-212, an initial 16-bit version with a RISC-like instruction set. Uniquely, each chip had four serial links which enabled it to be connected to other Transputers in a network. In 1994, the T-9000 was launched, a design optimized for use in parallel computers using a systolic array configuration.

The SuperRISCs

In 1988, DEC formed a small team to develop a new architecture for the company. Eleven years previously, it had moved from the PDP-11 to the VAX architecture, but it was foreseen that VAX would start lagging behind by the early 1990s. The project turned out to be huge, with more than 30 engineering groups in 10 countries working on the Alpha AXP architecture, as it came to be known (+15).

The team were given a fabulous design opportunity: an architecture that would take DEC into the 21st century. To accommodate the 15-25 year life span expected of the architecture, they looked back over the previous 25 years and concluded that a 1,000-fold increase in computing power had occurred. They envisaged the same for the next 25 years, and so concluded that their design would, in the future, be run at 10 times the clock speed, issue 10 times as many instructions per cycle (10-way superscalar), and be used with 10 processors working together in a system. To enable the processor to run multiple operating systems efficiently, they took a novel approach and placed interrupts, exceptions, memory management and error handling into code called PALcode (Privileged Architecture Library), which has access to the CPU hardware in the way that microcode normally does. This enables the Alpha to be unbiased toward a particular computing style.

The Alpha architecture was chosen to be RISC, but crucially it focused on pipelining and multiple instruction issue rather than on traditional features such as condition codes and byte writes to memory, which were omitted because they slow down the former techniques. The chip was released in 1992 and in that year entered the Guinness Book of Records as the world's fastest single-chip microprocessor. While the Alpha attempts to increase instruction speed by simplifying the architecture and concentrating on clock speed and multiple issue, the PowerPC from IBM and Motorola is the antithesis of this. The PowerPC was born out of the need of IBM and Motorola to develop a high-performance workstation CPU. Apple, another member of the PowerPC alliance, needed a CPU to replace the Motorola 680x0 in its line of Macintosh computers. The PowerPC is an evolution of IBM's Performance Optimized With Enhanced RISC (POWER) multichip processors. Motorola contributed the bus interface from their 88110 microprocessor.

Conclusion

The microprocessor has become a formidable force in computing. To go from a humble beginning, as a means of reducing the price of a calculator, to high-powered uniprocessor and multiprocessor machines in only two and a half decades is an astounding pace. Like most classic inventions, its early years belong firmly to the start-ups and fledgling companies, which did not have the baggage of the established companies and grew quickly. However, the mid-1980s saw a changeover, mainly due to the spiralling cost of research into process technologies and the greater man-hours needed to implement designs of hundreds of thousands of transistors. This new phase was headed by Motorola, Intel, IBM and DEC. It is now acknowledged that RISC is the superior architectural concept, and all the aforementioned companies have leading designs using RISC.

The microprocessor was originally designed for a calculator, yet in recent years it has found its way into a multitude of designs. A seemingly exponential growth curve of applications has occurred. From cars to personal computers, televisions to telephones, the microprocessor proliferates, and the growth curve shows no signs of abating. This essay traces just part of the long history of the microprocessor and the path its designs took. There are many other fields where the microprocessor has made a huge impact, not least the low-cost market, which deserve to be investigated further.

References

Footnotes:

1 Dummer, G. (1978), Electronic Inventions & Discoveries, pg. 143

2 Slater, R. (1989), Portraits in Silicon, MIT Press

3 Dummer, G. (1978), Electronic Inventions & Discoveries, pg. 141

4 Augarten, S. (1983), State of the Art, Ticknor and Fields, pg. 22

5 Electronic News, Intel advertisement, November 15, 1971

6 Noyce, R., Hoff, M., A History of Microprocessor Development at Intel, IEEE Micro, February 1981

7 Augarten, S. (1983), State of the Art, Ticknor and Fields, pg. 36

8 Noyce, R., Hoff, M., A History of Microprocessor Development at Intel, IEEE Micro, February 1981

9 Dale, R. (1985), The Sinclair Story, G. Duckworth & Co. Ltd., London, pg. 95

10 IEEE Transactions on Computers, VLSI Processor Architecture, December 1984

11 Hennessy, J., Patterson, D. (1990), Computer Architecture: A Quantitative Approach, pg. 189

12 IEEE Micro, August 1993, pg. 5

13 Hennessy, J., Patterson, D. (1990), Computer Architecture: A Quantitative Approach, Appendix E

14 IEEE Micro, October 1993, pg. 54

15 Alpha, Communications of the ACM, February 1993, Vol. 36, No. 2, pg. 32


Copyright © 1982-2000, Lexikon Services "History of Computing" ISBN 0-944601-78-2