
Why the IBM PC Used an Intel 8088

Intel was inside the first IBM PC, but how and why it got picked is sometimes a matter of contention.

One of the big decisions IBM made in creating the original IBM PC was choosing to use the Intel 8088 processor as its central processing unit (CPU). This turned out to be hugely influential in establishing the Intel architecture—often called the x86 architecture—as the standard for the vast majority of the personal computer industry. But there are many stories around how the decision was made.

Up to that point, pretty much all the popular personal computers had run 8-bit processors. This included the Intel 8080 that was in the MITS Altair 8800 (the machine that led to Bill Gates and Paul Allen creating the first PC BASIC and then to the founding of Microsoft); the Zilog Z80, a chip that offered compatibility with the 8080 along with a variety of improvements and was used in the Osborne 1, Kaypro II and many other CP/M-based machines; and the MOS Technology 6502, which was used in the Apple II and the Commodore PET.

Intel followed its 8080 with the 8-bit 8085 and introduced the 16-bit 8086 in 1978. That was followed in 1979 by the 8088, which had the same 16-bit internal architecture but was connected to an 8-bit data bus. Meanwhile, other, more advanced chips were coming to market, such as the Motorola 68000, introduced in 1979 with a 32-bit instruction set; it would later be the processor in Apple's Lisa and Macintosh, the Commodore Amiga, and a number of UNIX-based workstations. Both Gates and Allen say Microsoft talked IBM out of using an 8-bit processor and into the 16-bit 8088.

Here's how Gates described it in an interview I did with him for PC Magazine in March 1997:

"For IBM it was extremely different because this was a project where they let a supplier—a partner, whatever you call us—shape the definition of the machine and provide fundamental elements of the machine. When they first came to us, their concept was to do an 8-bit computer. And the project was more notable because they were going to do it so quickly and use an outside company ... The novel thing was: Could you work with outsiders, which in this case was mostly ourselves but also Intel, and do it quickly? And the key engineer on the project, Lou Eggebrecht, was fast-moving. Once we convinced IBM to go 16-bit (and we looked at 68000 which unfortunately wasn't debugged at the time so decided to go 8086), he cranked out that motherboard in about 40 days."

Allen is equally insistent on this view in his 2011 autobiography, Idea Man: A Memoir by the Cofounder of Microsoft, saying, "After we talked them out of an 8-bit machine and won them over to the Intel 8086 (or as it turned out, the cheaper but virtually identical 8088), they wanted everything in our 16-bit cupboard."

Allen and Gates certainly believe that Microsoft led IBM to make that decision, but the IBM team tells a somewhat different story.

Dave Bradley, who wrote the BIOS (basic input output system) for the IBM PC, and many of the other engineers involved say IBM had already decided to use the x86 architecture while the project was still a task force preparing for management approval in August 1980.

In 1990, Bradley told Byte there were four reasons for choosing the 8088. First, it had to be a 16-bit chip that overcame the 64K memory limit of the 8-bit processors. Second, the processor and its peripheral chips had to be immediately available in quantity. Third, it had to be technology IBM was familiar with. Fourth, it had to have available languages and operating systems.

That all makes sense in leading to the decision for the 8086 or 8088. Newer chips like the Motorola 68000 didn't yet have the peripheral chips ready in the summer of 1980. And IBM was very familiar with the Intel family; indeed, Bradley had just finished creating control software for the IBM DataMaster, which was based on the 8-bit 8085. Bradley said IBM chose the 8088 with the 8-bit bus because it saved money on RAM, ROM, and logic chips.

Big Blues: The Unmaking of IBM, by Paul Carroll, suggests the PC team picked the 8-bit version because using a full 16-bit processor might have caused IBM's Management Committee to cancel the project for fear of hurting sales of its more powerful products. Bill Sydnes, who headed hardware engineering for the project, has said similar things in a few interviews.

In Hard Drive: Bill Gates and the Making of the Microsoft Empire, by James Wallace and Jim Erickson, Sydnes said IBM considered several different chips, including the 68000, but that the 68000 would not have been ready until six to nine months after IBM needed it. Jack Sams, who headed software development, said that IBM had already decided to use a 16-bit chip before Gates was contacted but that they hadn't told him.

The decision to use the 8088 would set the stage for industry-standard computing that continues to this day. Many other companies created machines that ran the 8088 and later the 8086; this continued as Intel introduced the 80286 (which IBM used in its PC AT in 1984) and later designs. AMD originally contracted with Intel to be a second source of 8086 and 8088 processors, since in the early days of the industry, companies wanted multiple suppliers. Later, after much litigation, AMD created its own x86-compatible architecture. Together, the two firms dominated processor sales for the PC market, with the notable exception of Apple, which used the Motorola architecture and then the IBM-Motorola PowerPC architecture until moving to Intel.

Today, virtually every laptop and desktop sold remains capable of running software designed for the original 8088/86 architecture, and thus for the original IBM PC.

For more, check out PCMag's full coverage of the 40th anniversary of the IBM PC.
