If I call a virtual function 1000 times in a loop, will I suffer from the vtable lookup overhead 1000 times or only once?
I think the real problem is not the vtable lookup itself: that is a very fast operation, especially in a loop where the required values stay in cache (and if the loop is complex enough for them not to, the virtual call won't dominate its cost anyway). The real problem is that the compiler cannot inline the function at compile time.
This matters most when the virtual function is very small, e.g. it only returns a single value. The relative overhead is then huge, because you pay for an entire function call just to fetch one value; if the call could be inlined, most of that cost would disappear.
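To make that concrete, here is a minimal sketch (the types and function names are made up for illustration, they are not from the question): a tiny virtual getter pays for a full indirect call per iteration, while the non-virtual equivalent is trivially inlined.
#include <cstddef>

struct Counter {
    virtual ~Counter() = default;
    virtual int value() const { return 42; }  // tiny virtual function
};

struct PlainCounter {
    int value() const { return 42; }          // non-virtual equivalent
};

// The compiler generally cannot inline c.value() here, because the dynamic
// type behind the reference is unknown: every iteration pays for an indirect
// call just to fetch one int.
long sum_virtual(const Counter &c, std::size_t n) {
    long sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += c.value();
    return sum;
}

// Here value() is inlined; optimizers typically reduce the whole loop to n * 42.
long sum_plain(const PlainCounter &c, std::size_t n) {
    long sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += c.value();
    return sum;
}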
If the virtual function itself is expensive, I wouldn't worry about the vtable at all.
The compiler may be able to optimise it - for example, the following is (at least conceptually) easily optimised:
Foo * f = new Foo;
for ( int i = 0; i < 1000; i++ ) {
f->func();
}
However, other cases are more difficult:
vector <Foo *> v;
// populate v with 1000 Foo (not derived) objects
for ( size_t i = 0; i < v.size(); i++ ) {
v[i]->func();
}
the same conceptual optimisation is applicable, but much harder for the compiler to see.
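If you, unlike the compiler, know for certain that the container really holds only plain Foo objects, you can apply that optimisation by hand with a qualified call, which suppresses dynamic dispatch (a sketch; it is only correct because of that guarantee):
for ( size_t i = 0; i < v.size(); i++ ) {
    v[i]->Foo::func();  // qualified call: resolved statically, no vtable lookup,
                        // and eligible for inlining -- safe only because v is
                        // known to contain Foo objects and nothing derived
}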
Bottom line - if you really care about it, compile your code with all optimisations enabled and examine the compiler's assembler output.
Let's give it a try with g++ targeting x86:
$ cat y.cpp
struct A
{
virtual void not_used(int);
virtual void f(int);
};
void foo(A &a)
{
for (unsigned i = 0; i < 1000; ++i)
a.f(13);
}
$
$ gcc -S -O3 y.cpp # assembler output, max optimization
$
$ cat y.s
.file "y.cpp"
.section .text.unlikely,"ax",@progbits
.LCOLDB0:
.text
.LHOTB0:
.p2align 4,,15
.globl _Z3fooR1A
.type _Z3fooR1A, @function
_Z3fooR1A:
.LFB0:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
pushq %rbx
.cfi_def_cfa_offset 24
.cfi_offset 3, -24
movq %rdi, %rbp
movl $1000, %ebx
subq $8, %rsp
.cfi_def_cfa_offset 32
.p2align 4,,10
.p2align 3
.L2:
movq 0(%rbp), %rax
movl $13, %esi
movq %rbp, %rdi
call *8(%rax)
subl $1, %ebx
jne .L2
addq $8, %rsp
.cfi_def_cfa_offset 24
popq %rbx
.cfi_def_cfa_offset 16
popq %rbp
.cfi_def_cfa_offset 8
ret
.cfi_endproc
.LFE0:
.size _Z3fooR1A, .-_Z3fooR1A
.section .text.unlikely
.LCOLDE0:
.text
.LHOTE0:
.ident "GCC: (GNU) 5.3.1 20160406 (Red Hat 5.3.1-6)"
.section .note.GNU-stack,"",@progbits
$
The L2 label is the top of the loop. The line right after L2 reloads the vpointer into rax. The call 4 lines after L2 is indirect, fetching the pointer to the f() override from the vtable on every iteration.
I'm surprised by this. I would have expected the compiler to treat the address of the f() override function as a loop invariant. It seems like gcc is making two "paranoid" assumptions: that the f() override might change the vpointer stored in the object (so the vpointer is reloaded from the object on every iteration), and that the function the vtable slot resolves to might therefore differ from call to call (so the indirect call is re-resolved every iteration rather than hoisted out of the loop as an invariant).
Edit: In a separate compilation unit, I implemented A::f() and a main function with a call to foo(). I then built an executable with gcc using link-time optimization, and ran objdump on it. The virtual function call was inlined. So, perhaps this is why gcc optimization without LTO is not as ideal as one might expect.
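For reference, a build along those lines looks roughly like this (impl.cpp is a placeholder name for the separate compilation unit; the exact file names were not given):
$ g++ -O3 -flto -c y.cpp
$ g++ -O3 -flto -c impl.cpp        # defines A::f, A::not_used and main()
$ g++ -O3 -flto y.o impl.o -o prog
$ objdump -d prog                  # inspect foo() and check whether the call was inlined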
For a study of the overhead of virtual function calls, I recommend the paper "The Direct Cost of Virtual Function Calls in C++" by Karel Driesen and Urs Hölzle.
If the compiler can deduce that the object on which you're calling the virtual function doesn't change, then, in theory, it should be able to hoist the vtable lookup out of the loop.
Whether your particular compiler actually does this is something you can only find out by looking at the assembly code it produces.
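One common case where this works out is when the dynamic type is provably fixed, for example because the class is marked final (made-up names below, just to illustrate):
struct Base {
    virtual ~Base() = default;
    virtual void step();
};

struct Leaf final : Base {   // final: nothing can override step() any further
    void step() override;
};

void run(Leaf &l) {
    // Because Leaf is final, l's dynamic type must be Leaf, so compilers can
    // typically devirtualize this into a direct call to Leaf::step (which can
    // then be inlined if its definition is visible), with no per-iteration
    // vtable lookup.
    for (int i = 0; i < 1000; ++i)
        l.step();
}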
I would say this depends on your compiler as well as on what the loop looks like. Optimizing compilers can do a lot for you, and if the target of the virtual call is predictable the compiler may be able to help. You may find details about the optimizations your compiler performs in its documentation.