I hope this question isn't too stupid, because it may seem obvious. While doing a little research on buffer overflows I stumbled over a simple question:
After going to a n
> While the CPU is processing instructions, it does increment `eip` by the appropriate size of the last executed instruction automatically (unless overridden by one of those `jmp`/`j[condition]`/`call`/`ret`/`int`/... instructions).
That's what I wanted to know.
I'm well aware that there's more stuff around (NX bit, pipelining etc.).
Thanks everybody for their replies.
A somewhat boringly extended explanation (saying the same as those comments):
The CPU has a special-purpose register, the instruction pointer `eip`, which points to the next instruction to execute. A `jmp`, `call`, `ret`, etc. ends internally with something similar to `mov eip,<next_instruction_address>`.
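For instance, here is a minimal 32-bit NASM-style sketch (hypothetical labels, Linux `int 0x80` exit assumed, not from the question itself) of how those instructions end up loading a new value into `eip`:

```
        global  _start
        section .text
_start:
        jmp     skip            ; eip <- address of 'skip'
        mov     eax, 0          ; never reached
skip:
        call    do_nothing      ; pushes address of the next instruction, eip <- do_nothing
        mov     eax, 1          ; sys_exit (Linux, 32-bit ABI)
        xor     ebx, ebx        ; exit code 0
        int     0x80

do_nothing:
        ret                     ; pops the saved address back into eip
```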
While the CPU is processing instructions, it does increment `eip` by the appropriate size of the last executed instruction automatically (unless overridden by one of those `jmp`/`j[condition]`/`call`/`ret`/`int`/... instructions).
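As a sketch of that automatic increment (the byte counts in the comments are the 32-bit encodings of these particular instructions; the labels and the Linux exit sequence are just for illustration):

```
        global  _start
        section .text
_start:
        nop                     ; 1 byte  (90)              eip += 1
        xor     eax, eax        ; 2 bytes (31 C0)           eip += 2
        mov     eax, 42         ; 5 bytes (B8 2A 00 00 00)  eip += 5
        jmp     done            ; 2 bytes (EB xx), but here eip is simply
                                ; overwritten with the address of 'done'
done:
        mov     eax, 1          ; sys_exit (Linux, 32-bit ABI)
        xor     ebx, ebx
        int     0x80
```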
Wherever you point `eip` (by whatever means), the CPU will try its best to execute the content of that memory as the next instruction opcode(s), not aware of any context (where or why it came to this new `eip`). Actually this amnesia sort of happens ahead of each instruction executed (I'm silently ignoring the modern internal x86 architecture with its various pre-execution queues, branch prediction, translation into micro-instructions, etc. ... all of that is an implementation detail quite hidden from the programmer, usually visible only through poor performance if you disturb that architecture much by jumping all around mindlessly). So it's the CPU, `eip` and the here-and-now, not much else.
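A small illustration of that context-free behaviour, assuming the same hypothetical 32-bit NASM/Linux setup: raw `db` bytes placed in `.text` are decoded and executed exactly like hand-written instructions once `eip` lands on them:

```
        global  _start
        section .text
mystery:
        db      0x40            ; decodes as 'inc eax' in 32-bit mode
        db      0xC3            ; decodes as 'ret'

_start:
        xor     eax, eax
        call    mystery         ; eip now points at the raw bytes above
        mov     ebx, eax        ; eax is 1 here; use it as the exit code
        mov     eax, 1          ; sys_exit (Linux, 32-bit ABI)
        int     0x80
```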
Note: some context on x86 can be provided by having supervising code (like the OS) define the memory layout, i.e. marking some areas of memory as non-executable. A CPU detecting that its `eip` points into such an area will signal a failure and fall into a "trap" handler (usually also managed by the OS, which kills the offending process).
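As a hedged sketch of that trap (assuming a modern Linux/x86 toolchain that maps `.data` without execute permission and a CPU with NX enabled), moving the same two bytes into a non-executable section makes the CPU fault as soon as `eip` points there:

```
        global  _start
        section .data
payload:
        db      0x40, 0xC3      ; 'inc eax' / 'ret', but now in a non-executable page

        section .text
_start:
        call    payload         ; eip -> non-executable memory: page fault, SIGSEGV
        mov     eax, 1          ; never reached
        xor     ebx, ebx
        int     0x80
```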
The `call` instruction saves (onto the stack) the address of the instruction after it. After that, it simply jumps. It doesn't explicitly tell the CPU to look for a `ret` instruction, since the return is handled by popping (from the stack) the return address that `call` saved in the first place. This allows for multiple calls and returns, or to put it simply, nested calls, as in the sketch below.