What is the difference between a machine instruction and a micro-op? I found the following definition here:
A small, basic instruction, used in series to make up a machine instruction.
Quite a few years ago the conventional wisdom became that RISC was better than CISC: if you wanted a very high-speed processor, you wanted all of your instructions to be very simple, so that each one could complete in a short period of time and the clock speed could be pushed higher. That led Andrew Tanenbaum to predict that "5 years from now no one will be running x86". That was back in the early 90s.
So what happened? Isn't x86 (and thus AMD64, also known as x86_64) the best-known CISC instruction set? Well, don't underestimate the ingenuity of the Intel (and AMD) engineers. They realised that if they wanted a higher-speed processor (single-core clock speeds were climbing from hundreds of MHz towards several GHz around the turn of the millennium), they couldn't process their complex instructions in one clock cycle. The solution was "micro-ops".
Each micro-op would execute in a single clock cycle and would resemble a RISC-type instruction. When the processor encountered a CISC instruction, it would decode it into several micro-ops, each of which would take one cycle. That way they could keep their old cruddy instruction set architecture (for backwards compatibility) and still have very high clock speeds.
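To make the decode step concrete, here is a toy sketch in Python. It is purely an illustrative model, not Intel's or AMD's real decoder: the micro-op names (LOAD, ADD, STORE), the temporary register, and the text-based instruction format are all made up for the example. It just shows the idea that one read-modify-write CISC instruction gets cracked into several simple, RISC-like steps.

```python
# Toy model: one CISC-style instruction is decoded into a short sequence of
# RISC-like micro-ops. The micro-op names and formats here are illustrative
# assumptions, not the processor's actual internal encoding.

from dataclasses import dataclass

@dataclass
class MicroOp:
    op: str            # e.g. "LOAD", "ADD", "STORE"
    dest: str          # destination (architectural register, temporary, or address)
    srcs: tuple        # source operands

def decode(instruction: str) -> list[MicroOp]:
    """Split a simplified x86-style instruction into micro-ops."""
    mnemonic, operands = instruction.split(maxsplit=1)
    dst, src = [o.strip() for o in operands.split(",")]

    if mnemonic == "add" and dst.startswith("["):
        # Memory-destination add: a read-modify-write becomes three micro-ops.
        addr = dst.strip("[]")
        return [
            MicroOp("LOAD",  "tmp0", (addr,)),         # tmp0 <- mem[addr]
            MicroOp("ADD",   "tmp0", ("tmp0", src)),   # tmp0 <- tmp0 + src
            MicroOp("STORE", addr,   ("tmp0",)),       # mem[addr] <- tmp0
        ]
    # Simple register-register instructions map to a single micro-op.
    return [MicroOp(mnemonic.upper(), dst, (src,))]

if __name__ == "__main__":
    for uop in decode("add [rdi], rax"):
        print(uop)
```

Running it prints three micro-ops for `add [rdi], rax`, while a plain register-register `add rax, rbx` would decode to just one. Real decoders of course work on binary machine code and emit proprietary internal micro-op formats, but the principle is the same: the complex instruction set is a front-end format, and what actually executes looks much more like RISC.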