Before you cringe at the duplicate title: the other question wasn't suited to what I'm asking here (IMO). So.
I really want to use virtual functions in my application, but I'm worried about the performance cost.
In my opinion, when there were fewer loop iterations, there may have been no context switching; but when you increased the number of iterations, the chances are strong that a context switch took place and is dominating the reading. For example, if the first program takes 1 sec and the second 3 secs, but a context switch adds 10 secs to each, then the measured ratio is 13/11 instead of 3/1.
I believe that your test case is too artificial to be of any great value.
First, inside your profiled function you dynamically allocate and deallocate an object as well as call a function. If you want to profile just the function call, then you should do just that (see the sketch after the next point).
Second, you are not profiling a case where a virtual function call represents a viable alternative to a given problem. A virtual function call provides dynamic dispatch. You should try profiling a case where a virtual function call is used as an alternative to the switch-on-type anti-pattern.
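On the first point, here's a minimal sketch of what an isolated measurement might look like. The class name VCS and its member f() are my own stand-ins, since the original test code isn't reproduced here; the point is just that the allocation happens once, outside the timed loop:

    #include <cstdio>
    #include <sys/time.h>

    // Hypothetical stand-in for the question's virtual class (not shown here).
    struct VCS {
        virtual ~VCS() {}
        virtual int f() { return 2; }
    };

    static double now_seconds() {
        timeval tv;
        gettimeofday(&tv, 0);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main() {
        const long N = 100000000;   // enough iterations to swamp timer noise
        VCS* obj = new VCS;         // allocate ONCE, outside the timed loop
        long sum = 0;

        double start = now_seconds();
        for (long i = 0; i < N; ++i)
            sum += obj->f();        // only the virtual dispatch is being timed
        double elapsed = now_seconds() - start;

        delete obj;
        std::printf("sum=%ld time=%f s\n", sum, elapsed);
        return 0;
    }

Note that an optimizer may devirtualize the call here, because the dynamic type of obj is statically known; in a real benchmark you'd want to obtain the pointer in a way the compiler can't see through.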
I think that this kind of testing is pretty useless, in fact:
1) you are wasting time on the profiling itself by invoking gettimeofday();
2) you are not really testing virtual functions, and IMHO this is the worst thing.
Why? Because you use virtual functions to avoid writing things such as:
    // C++ has no switch-on-type, so in practice this means
    // a chain of dynamic_cast checks:
    if (ClassA* a = dynamic_cast<ClassA*>(object)) {
        functionA(a);
    } else if (ClassB* b = dynamic_cast<ClassB*>(object)) {
        functionB(b);
    } else if (ClassC* c = dynamic_cast<ClassC*>(object)) {
        functionC(c);
    }
Your benchmark is missing this "if... else" dispatch block, so you don't measure the real advantage of virtual functions; compared against a plain non-virtual call they will always be the "loser".
To do proper profiling, I think you should benchmark something like the comparison sketched below.
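Here is a minimal sketch of such a comparison (class names and counts are illustrative, not from the question): it times the same mixed-type dispatch problem solved once with a dynamic_cast chain and once with a virtual call:

    #include <cstdio>
    #include <sys/time.h>

    struct Base   { virtual ~Base() {} virtual int run() = 0; };
    struct ClassA : Base { int run() { return 1; } };
    struct ClassB : Base { int run() { return 2; } };

    static double now_seconds() {
        timeval tv;
        gettimeofday(&tv, 0);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main() {
        const int N = 10000000;
        Base* objs[2] = { new ClassA, new ClassB };  // mixed types, so the
        long sum = 0;                                // dispatch can't be folded away

        // Manual dispatch: dynamic_cast chain, the switch-on-type equivalent.
        double t0 = now_seconds();
        for (int i = 0; i < N; ++i) {
            Base* o = objs[i & 1];
            if (ClassA* a = dynamic_cast<ClassA*>(o))      sum += a->run();
            else if (ClassB* b = dynamic_cast<ClassB*>(o)) sum += b->run();
        }
        double manual = now_seconds() - t0;

        // Virtual dispatch: the vtable picks the right override.
        t0 = now_seconds();
        for (int i = 0; i < N; ++i)
            sum += objs[i & 1]->run();
        double virt = now_seconds() - t0;

        std::printf("manual=%f s  virtual=%f s  (sum=%ld)\n", manual, virt, sum);
        delete objs[0];
        delete objs[1];
        return 0;
    }

With mixed types in the array, the manual chain has to pay for the dynamic_cast checks; that is the cost the virtual call should be compared against.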
There could be several reasons for the difference in time.
the heap manager may influence the result, because sizeof(VCS) > sizeof(VS). What happens if you move the new/delete out of the loop?
Again, due to size differences, memory cache may indeed be part of the difference in time.
BUT: you should really compare similar functionality. When you use virtual functions, you do so for a reason: calling a different member function depending on the object's dynamic type. If you need this functionality and don't want to use virtual functions, you have to implement it manually, be it with a function table or even a switch statement (see the sketch below). This comes at a cost, too, and that's what you should compare against virtual functions.
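As a concrete illustration of that manual alternative (the names here are hypothetical, not from the question): a hand-rolled function table keyed by a type tag, which is roughly what a vtable does for you automatically:

    #include <cstdio>

    enum Type { TYPE_A, TYPE_B, TYPE_COUNT };

    struct Object { Type tag; };

    static int handleA(Object*) { return 1; }
    static int handleB(Object*) { return 2; }

    // One entry per type, indexed by the tag.
    static int (*const table[TYPE_COUNT])(Object*) = { handleA, handleB };

    static int dispatch(Object* o) {
        return table[o->tag](o);   // same shape as vtable[slot](this)
    }

    int main() {
        Object a = { TYPE_A };
        Object b = { TYPE_B };
        std::printf("%d %d\n", dispatch(&a), dispatch(&b));  // prints: 1 2
        return 0;
    }

Keeping the tag and the table in sync across every type is now your job; that maintenance burden plus the table lookup is the cost you'd compare against a virtual call.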
With a small number of iterations, there's a chance that your code gets preempted by some other program running in parallel, or that swapping occurs, or that anything else the operating system normally isolates your program from happens, and the time your program spent suspended gets included in your benchmark results. This is the number one reason why you should run your code something like a dozen million times to measure anything more or less reliably.
When using too few iterations, there is a lot of noise in the measurement. The gettimeofday function is not going to be accurate enough to give you good measurements over only a handful of iterations, not to mention that it records total wall time (which includes time spent suspended while other threads run).
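If C++11 is available, a less noisy setup (my suggestion; the class and names are illustrative) would use the monotonic std::chrono::steady_clock together with enough iterations to dwarf the timer's resolution:

    #include <chrono>
    #include <cstdio>

    struct Base { virtual ~Base() {} virtual long f(long x) { return x + 1; } };

    int main() {
        const long N = 50000000;  // enough work to dwarf timer noise
        Base base;
        Base* obj = &base;
        long sum = 0;

        // steady_clock is monotonic, so it can't jump backwards the way
        // wall-clock time (gettimeofday) can.
        auto start = std::chrono::steady_clock::now();
        for (long i = 0; i < N; ++i)
            sum += obj->f(i);
        auto stop = std::chrono::steady_clock::now();

        double secs = std::chrono::duration<double>(stop - start).count();
        std::printf("sum=%ld, %.3f s, %.2f ns/call\n",
                    sum, secs, secs / N * 1e9);
        return 0;
    }

As before, an optimizer may devirtualize or fold such a loop, so a real measurement needs inputs the compiler can't see through.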
Bottom line, though: you shouldn't come up with some ridiculously convoluted design just to avoid virtual functions. They really don't add much overhead. If you have incredibly performance-critical code and you know that virtual functions make up most of the time, then perhaps it's something to worry about. In any practical application, though, virtual functions won't be what's making your application slow.