I'm writing a compiler for a subset of Pascal. The compiler produces machine instructions for a made-up machine. I want to write a peephole optimizer for this machine language.
An easy way to do this is to implement your peephole optimizer as a finite state machine.
We assume you have a raw code generator that generates instructions but does not emit them, and an emit routine that sends actual code to the object stream.
The state machine captures instructions that your code generator produces, and remembers sequences of zero or more generated instructions by transitioning between states. A state thus implicitly remembers a (short) sequence of generated but un-emitted instructions; it also has to remember the key parameters of the instructions it has captured, such as a register name, a constant value, and/or addressing modes and abstract target memory locations. A special start state remembers the empty sequence of instructions. At any moment you need to be able to emit the unemitted instructions ("flush"); if you do this all the time, your peephole optimizer captures the next instruction and then immediately emits it, doing no useful work.
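To make the plumbing concrete, here is a minimal C sketch of that split (the names and the printf-based emit are my own scaffolding, not anything from a real compiler). With only the start state the machine never captures anything, so every hook just flushes and emits immediately; that is the no-useful-work end of the spectrum, and the extra states added in the example below are what turn it into an optimizer.

#include <stdio.h>

enum PeepholeState { EMPTY };                 /* start state: nothing captured */
static enum PeepholeState PEEPHOLESTATE = EMPTY;

static void emit(const char *text)            /* the only routine that writes to the object stream */
{
    printf("%s\n", text);
}

static void flush(void)                       /* emit whatever the FSM has captured */
{
    /* this degenerate one-state machine never captures anything */
    PEEPHOLESTATE = EMPTY;
}

static void GenerateADD(void)                 /* hook the raw code generator calls */
{                                             /* instead of emitting ADD itself */
    flush();
    emit("ADD");
}

int main(void)
{
    GenerateADD();                            /* prints: ADD */
    flush();
    return 0;
}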
To do useful work, we want the machine to capture as long a sequence as possible. Since there are typically many kinds of machine instructions, as a practical matter you can't remember too many in a row or the state machine will become enormous. But it is practical to remember the last two or three for the most common machine instructions (load, add, cmp, branch, store). The size of the machine is really determined by the length of the longest peephole optimization we care to do, but if that length is P, the entire machine need not be P states deep.
Each state has transitions to a next state based on the "next" instruction I produced by your code generator. Imagine a state represents the capture of N instructions. The transition choices are: capture I as well, and move to a state that remembers N+1 instructions; or flush (emit) the first k of the N captured instructions, move to the state that remembers the remaining N-k instructions, and then either capture I there or reprocess I against that state.
When flushing the k instructions, what actually gets emitted is the peephole-optimized version of those k. You can compute anything you want when emitting such instructions. You also need to "shift" the remembered parameters for the remaining instructions appropriately (the FIRSTCONSTANT=SECONDCONSTANT assignments below are exactly this shift).
This is all pretty easily implemented with a peephole optimizer state variable, and a case statement at each point where your code generator produces its next instruction. The case statement updates the peephole optimizer state and implements the transition operations.
Assume our machine is an augmented stack machine that has
PUSHVAR x
PUSHK i
ADD
POPVAR x
MOVE x,k
instructions, but the raw code generator generates only pure stack machine instructions; e.g., it does not emit the MOVE instruction at all. We want the peephole optimizer to introduce it.
The peephole cases we care about are:
PUSHK i, PUSHK j, ADD ==> PUSHK i+j
PUSHK i, POPVAR x ==> MOVE x,i
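For a concrete (hypothetical) example: for the Pascal assignment x := 5 the raw generator would produce

PUSHK 5
POPVAR x

and the second rule collapses that pair into MOVE x,5; the first rule is what folds a constant subexpression such as 1+2 into a single PUSHK 3.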
Our state variables are:
PEEPHOLESTATE (an enum symbol, initialized to EMPTY)
FIRSTCONSTANT (an int)
SECONDCONSTANT (an int)
Our case statements:
GeneratePUSHK:
    switch (PEEPHOLESTATE) {
    EMPTY:      PEEPHOLESTATE=PUSHK;
                FIRSTCONSTANT=K;
                break;
    PUSHK:      PEEPHOLESTATE=PUSHKPUSHK;
                SECONDCONSTANT=K;
                break;
    PUSHKPUSHK:
    #IF consumeEmitLoadK // flush state, transition and consume generated instruction
                emit(PUSHK,FIRSTCONSTANT);
                FIRSTCONSTANT=SECONDCONSTANT;
                SECONDCONSTANT=K;
                PEEPHOLESTATE=PUSHKPUSHK;
                break;
    #ELSE // flush state, transition, and reprocess generated instruction
                emit(PUSHK,FIRSTCONSTANT);
                FIRSTCONSTANT=SECONDCONSTANT;
                PEEPHOLESTATE=PUSHK;
                goto GeneratePUSHK; // Java can't do this, but other languages can.
    #ENDIF
    }
GenerateADD:
    switch (PEEPHOLESTATE) {
    EMPTY:      emit(ADD);
                break;
    PUSHK:      emit(PUSHK,FIRSTCONSTANT);
                emit(ADD);
                PEEPHOLESTATE=EMPTY;
                break;
    PUSHKPUSHK: PEEPHOLESTATE=PUSHK;
                FIRSTCONSTANT+=SECONDCONSTANT;
                break;
    }
GeneratePOPX:
    switch (PEEPHOLESTATE) {
    EMPTY:      emit(POPVAR,X);
                break;
    PUSHK:      emit(MOVE,X,FIRSTCONSTANT);
                PEEPHOLESTATE=EMPTY;
                break;
    PUSHKPUSHK: emit(MOVE,X,SECONDCONSTANT);
                PEEPHOLESTATE=PUSHK;
                break;
    }
GeneratePUSHVARX:
    switch (PEEPHOLESTATE) {
    EMPTY:      emit(PUSHVAR,X);
                break;
    PUSHK:      emit(PUSHK,FIRSTCONSTANT);
                PEEPHOLESTATE=EMPTY;
                goto GeneratePUSHVARX;
    PUSHKPUSHK: PEEPHOLESTATE=PUSHK;
                emit(PUSHK,FIRSTCONSTANT);
                FIRSTCONSTANT=SECONDCONSTANT;
                goto GeneratePUSHVARX;
    }
The #IF shows two different styles of transitions, one that consumes the generated instruction, and one that does not; either works for this example. When you end up with a few hundred of these case statements, you'll find both types handy, with the "don't consume" version helping you keep your code smaller.
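To convince yourself they agree here: if the generator produces PUSHK 1, PUSHK 2, PUSHK 3, the consume style emits PUSHK 1 and stays in PUSHKPUSHK with FIRSTCONSTANT=2, SECONDCONSTANT=3, while the reprocess style emits PUSHK 1, drops back to PUSHK with FIRSTCONSTANT=2, and the goto re-runs the switch on K=3, ending in the same configuration. For a language without goto (the Java case noted in the comment above), the usual substitute for the reprocess style is to wrap the switch in a loop and let "reprocess" mean "go around again". Here is a C sketch of GeneratePUSHVARX written that way; the emit helpers and the little main driver are my own scaffolding:

#include <stdio.h>

enum PeepholeState { EMPTY, PUSHK, PUSHKPUSHK };
static enum PeepholeState PEEPHOLESTATE = EMPTY;
static int FIRSTCONSTANT, SECONDCONSTANT;

static void emitPUSHK(int k)           { printf("PUSHK %d\n", k); }
static void emitPUSHVAR(const char *x) { printf("PUSHVAR %s\n", x); }

static void GeneratePUSHVARX(const char *X)
{
    for (;;) {                              /* loop replaces "goto GeneratePUSHVARX" */
        switch (PEEPHOLESTATE) {
        case EMPTY:
            emitPUSHVAR(X);
            return;                         /* instruction handled: leave the loop */
        case PUSHK:
            emitPUSHK(FIRSTCONSTANT);       /* flush the captured PUSHK...         */
            PEEPHOLESTATE = EMPTY;
            break;                          /* ...then reprocess PUSHVAR x         */
        case PUSHKPUSHK:
            emitPUSHK(FIRSTCONSTANT);       /* flush the oldest capture...         */
            FIRSTCONSTANT = SECONDCONSTANT; /* ...shift the parameters...          */
            PEEPHOLESTATE = PUSHK;
            break;                          /* ...and reprocess PUSHVAR x          */
        }
    }
}

int main(void)
{
    /* pretend the raw generator produced PUSHK 7 and then PUSHVAR y */
    PEEPHOLESTATE = PUSHK;
    FIRSTCONSTANT = 7;
    GeneratePUSHVARX("y");                  /* prints: PUSHK 7, then PUSHVAR y */
    return 0;
}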
We need a routine to flush the peephole optimizer (called at basic block boundaries such as labels and branches, and at the end of code generation):
flush() {
    switch (PEEPHOLESTATE) {
    EMPTY:      break;
    PUSHK:      emit(PUSHK,FIRSTCONSTANT);
                break;
    PUSHKPUSHK: emit(PUSHK,FIRSTCONSTANT);
                emit(PUSHK,SECONDCONSTANT);
                break;
    }
    PEEPHOLESTATE=EMPTY;
    return;
}
It is interesting to consider what this peephole optimizer does with the following generated code:
PUSHK 1
PUSHK 2
ADD
PUSHK 5
POPVAR X
POPVAR Y
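Working it by hand against the case statements above (and assuming a final call to flush() when the sequence ends):

PUSHK 1      EMPTY      -> PUSHK        FIRSTCONSTANT=1             emits nothing
PUSHK 2      PUSHK      -> PUSHKPUSHK   SECONDCONSTANT=2            emits nothing
ADD          PUSHKPUSHK -> PUSHK        FIRSTCONSTANT=1+2=3         emits nothing
PUSHK 5      PUSHK      -> PUSHKPUSHK   SECONDCONSTANT=5            emits nothing
POPVAR X     PUSHKPUSHK -> PUSHK        (FIRSTCONSTANT still 3)     emits MOVE X,5
POPVAR Y     PUSHK      -> EMPTY                                    emits MOVE Y,3

The final flush() finds EMPTY and emits nothing, so the six generated instructions come out as just MOVE X,5 followed by MOVE Y,3.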
What this whole FSA scheme does is hide your pattern matching in the state transitions, and the response to matched patterns in the cases. You can code this by hand, and it is fast and relatively easy to code and debug. But when the number of cases gets large, you don't want to build such a state machine by hand. You can write a tool to generate the state machine for you; good background for that is how FLEX or LALR parser generators build their state machines. I'm not going to explain that here :-}