More as a way of committing my thoughts to words than anything else, here is my decidedly idiosyncratic view of the processors used during the Home Computer Revolution of the late Seventies and early Eighties, and a couple more besides. I do not pretend to know the minutiae of each one, so these thoughts are based on my own experience, limited in some cases(!)
Used in such greats as the Apple II, all the Acorn machines, the Orics and more. A somewhat simpler device than the Z80, with fewer instructions, fewer addressing modes and fewer registers - just the minimum complement of Accumulator, X and Y, in fact. It did, though, have the unique ability to access the first page of memory (0000h to 00ffh) much faster than the rest of memory.
Many people now claim that the 6502 was the first 'RISC' chip, although there weren't many instructions to 'reduce'. If you stretch a point, zero page's fast access was akin to having lots of registers, which sounds slightly RISC-like. It was completely un-RISC-like, though, in that zero page was only good for storing data (and incrementing and decrementing it, IIRC); all arithmetic had to be done on the Accumulator, and although X and Y were both 'indexing' registers, there were some sorts of indexing that only X could do, and others that only Y could do.
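The speed of zero page comes from the encoding: a zero-page instruction carries only a one-byte operand, with the high address byte implicitly zero, so there is one less byte to fetch than for a full absolute address. A quick sketch of the two address calculations (in Python, purely as illustration):

```python
# A zero-page instruction carries a one-byte operand; the high address
# byte is implicitly zero, so there is one less byte to fetch and decode.
def zero_page(operand):           # operand is a single byte, 00h-ffh
    return operand                # effective address 0000h-00ffh

def absolute(lo, hi):             # two operand bytes: one more to fetch
    return (hi << 8) | lo         # effective address 0000h-ffffh

assert zero_page(0x42) == 0x0042
assert absolute(0x42, 0x00) == zero_page(0x42)   # same address, longer encoding
```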
The Z80 formed the heart of the Sinclair machines (except the QL).
These were almost 16-bit devices. The 6809 in particular was somewhat similar to the Z80, in that you could join pairs of 8-bit registers together for 16-bit operations.
First seen in the mass market in the Atari ST, the Commodore Amiga and of course, the Apple Macintosh. These were the first true 16-bit home micros. At the time, just as there was a debate between the Z80 and the 6502, there was no way of telling whether 68000 or 8086 would be the dominant 16-bit processor.
Sinclair used a 68000 variant, the 68008, which had a multiplexed 8-bit data bus, in his ill-fated QL. Would it have fared better if he'd fitted it with a decent keyboard and disc drives? Probably not - a small company based in Cambridge, known as Cambridge Systems Technology, tried exactly that with the Thor, and who has ever heard of the Thor?
The 68000 was simpler to use than the 8086, having a flat address space, but despite 68020, 68030 and 68040 designs with co-processors and 32-bit data paths, we all know the story.
Acorn always had a reputation for weirdness, and I suppose this was the ultimate. While everyone else went 16-bit (or disappeared altogether), Acorn just kept selling variations on the same 8-bit theme. Then, all of a sudden, in 1987, they launched a machine known as the Archimedes. It was based on an entirely new processor: the Acorn RISC Machine. This was fully 32-bit for data, although it only boasted a 26-bit (equivalent) address bus. It was the first RISC-based home micro in production.
The ARM chip owed a lot to its designers' experience with the 6502, on which its instruction set was based, but it introduced a couple of new ideas. First, it had four processor modes, with 16 general-purpose registers available; some of the 16 were different (banked) in each mode. It also introduced conditional execution of instructions, avoiding many jumps in code and helping increase the efficiency of the pipeline. The other interesting feature was its ability to apply a barrel shifter to one of the operands of an instruction with no performance penalty. In other words, a shift (in effect, a multiply by a power of two) and an add can be done in one instruction. This is the kind of technology that Intel are hyping with their 'MMX' Pentiums. Yes, I know MMX is more than that, but it does say something...
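To make the barrel-shifter point concrete, here is a sketch (in Python, not ARM code) of what an instruction like ADD r0, r1, r2, LSL #2 does - shift the second operand and add, all in one go:

```python
MASK32 = 0xFFFFFFFF   # ARM registers are 32 bits wide

# What ADD r0, r1, r2, LSL #2 computes: the second operand passes
# through the barrel shifter on its way into the adder.
def add_shifted(r1, r2, shift):
    return (r1 + ((r2 << shift) & MASK32)) & MASK32

# Multiply by five in a single ADD: r0 = r1 + (r1 << 2)
assert add_shifted(7, 7, 2) == 35
```

Adding a register to a shifted copy of itself is exactly the trick compilers still use for multiplying by small constants without a multiply instruction.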
The first ARM chip was available as a second processor for Acorn's 8-bit micros. The ARM chip in the Archimedes was an ARM 2, which ran at 8 MHz. The ARM 3 was installed in several later machines, running at speeds up to 25 MHz; its greatest performance boost came from a simple on-board 4k cache. It was after this that ARM Ltd was spun off from Acorn and started licensing the designs. They came up with the ARM 6 macrocell (what happened to 4 and 5?) and turned it into the ARM 610 processor used in the first Risc PCs, coupled with an 8k cache, a full 32-bit addressing mode, better cache algorithms and a 30 MHz clock. The ARM 710 soon followed with a few performance tweaks, running at 40 MHz, and the ARM 810 was announced.
Then along came Digital. I'm not sure who initiated the pairing, but somehow Digital Equipment Corp, makers of the blindingly fast Alpha processors, got hold of the ARM designs and built a processor using their semiconductor expertise. The result was the StrongARM: a processor that is functionally little different from the ARM 710, except that it is (internally) clocked at 202 MHz. Oh yes, it also has two 8k caches, one for instructions and one for data. Rumour has it that the interpreter of RiscOS's built-in BASIC fits neatly into the instruction cache; if so, it explains why interpreted BBC BASIC V is so flippin' fast. The other thing - and this is the cause of most of the few software problems - is that the length of the pipeline has been increased, so self-modifying code that relies on knowing the pipeline length to calculate the PC gets in a real mess.
The 8086 was Intel's first truly 16-bit processor, although it appeared to be a somewhat rushed development of the 8-bit 8080, which was an enhanced 8008, which was directly related to the very first (4-bit) general-purpose processor - the 4004. No-one seems to know quite why IBM decided to use the 8086 in their Personal Computer, although there weren't many 16-bit processors around at that time.
The '286 was what the '86 should have been, but its development too was rushed. Someone didn't check the design properly, and a bug was introduced into the '286 that was so useful it had to be carried forward into the '386, the '486, and even (in a limited sense) the Pentium. It is interesting, though fruitless, to speculate what would have happened to the 'PC' industry had this bug not existed...
Put simply, this processor had two modes of addressing memory. One was compatible with the 8086 and could only address one megabyte; the other was simpler and could address sixteen meg. Actually, the 8086 mode was only supposed to be able to address one meg. This was achieved not with a flat 20-bit address, but by shifting a 16-bit segment value left four bits and adding a 16-bit offset. This is the root of the 64k segment size problem that dogs DOS to this day.
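The arithmetic is simple enough to sketch (in Python, purely as illustration). Note that many different segment:offset pairs alias the same physical address:

```python
# 8086 addressing: physical address = segment * 16 + offset,
# where segment and offset are both 16-bit values.
def physical(segment, offset):
    return ((segment & 0xFFFF) << 4) + (offset & 0xFFFF)

assert physical(0x1234, 0x0010) == 0x12350
# Different segment:offset pairs can alias the same address:
assert physical(0x1000, 0x0000) == physical(0x0FFF, 0x0010)
# The top of the one-meg range, reached without any overflow:
assert physical(0xF000, 0xFFFF) == 0xFFFFF
```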
Since DOS only ran in 8086 mode, there was no easy way to address more than one meg of memory, and various standards were set up to allow access to addresses beyond the 1 meg barrier. The bug in the '286 was useful in that it allowed 8086-mode programs to see the first 64k above 1 meg. It involved some weird messing about with the keyboard controller to toggle the state of the 21st address line, and this is why you still see an 'A20 handler' on some systems.
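The bug is easy to see in the segment arithmetic. With segment FFFFh and an offset above 000Fh, the sum overflows 20 bits; the 8086 simply dropped the carry and wrapped to zero, while the '286 (with that 21st address line, A20, enabled) kept it, exposing the first 64k-odd bytes above 1 meg. A sketch, purely illustrative:

```python
# 8086-mode address: segment * 16 + offset. A real 8086 has only
# 20 address lines, so the carry out of bit 19 is simply lost;
# the '286 keeps it when the A20 line is enabled.
def real_mode_address(segment, offset, a20_enabled):
    addr = ((segment & 0xFFFF) << 4) + (offset & 0xFFFF)
    return addr if a20_enabled else addr & 0xFFFFF   # 8086: drop the carry

assert real_mode_address(0xFFFF, 0x0010, False) == 0x000000  # 8086 wraps to zero
assert real_mode_address(0xFFFF, 0x0010, True)  == 0x100000  # '286: first byte above 1 meg
assert real_mode_address(0xFFFF, 0xFFFF, True)  == 0x10FFEF  # top of the extra ~64k
```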
All of the systems to use memory beyond 1 meg used some form of paging, whereby portions of the extra memory were 'mapped' into spare bits of the address range below 1 meg, but they all suffered the problem that you couldn't easily tell which addresses were safe to use. Memory below 640k was verboten because DOS needed all the 'real' memory it could get. Addresses from 640k to 1 meg were set aside by DOS for 'peripherals': hard disc controllers, video cards and so on. Most systems had some part of this area free, so those addresses could be used to page in extra memory. Not all systems had the same areas free, though, and that's why the '286 bug was a godsend. It was a part of memory that wasn't set aside for anything else, and was guaranteed to be free. Memory could be paged into and out of this 64k chunk quite happily, and you could access as many megabytes as you could afford. It was a bit like trying to breathe through a straw, but it was possible.
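The paging idea itself can be modelled in a few lines (a toy sketch in Python - the class and names are mine, not any real standard's API): many banks of extra memory exist, but the processor only ever sees one of them at a time, through a fixed 64k window in its address space:

```python
BANK_SIZE = 64 * 1024   # the size of the window below the 1 meg limit

class BankedMemory:
    """Toy model of paged 'extra' memory seen through one 64k window."""
    def __init__(self, banks):
        self.banks = [bytearray(BANK_SIZE) for _ in range(banks)]
        self.current = 0                     # which bank the window shows

    def page_in(self, bank):                 # the 'mapping' step
        self.current = bank

    def read(self, offset):                  # CPU reads go through the window
        return self.banks[self.current][offset]

    def write(self, offset, value):          # ...and so do writes
        self.banks[self.current][offset] = value

mem = BankedMemory(banks=16)                 # a whole extra megabyte
mem.page_in(3); mem.write(0, 0xAA)           # write lands in bank 3
mem.page_in(0)
assert mem.read(0) == 0                      # bank 0 is untouched
mem.page_in(3)
assert mem.read(0) == 0xAA                   # data reappears when paged back in
```

The straw analogy holds: however many banks you have, only 64k of them is ever breathable at once.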