When performance is essential to an application, should consideration be given to whether an array is declared on the stack or on the heap? Allow me to outline why this question has come up.
The usual way of implementing a 2 dimensional array in C++ would be to wrap it in a class, using std::vector, and have class accessors which calculate the index. However:
Any questions concerning optimization can only be answered by measuring, and even then, they are only valid for the compiler you are using, on the machine on which you do the measurements.
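Just to make "measuring" concrete: a minimal sketch of the sort of timing harness I have in mind, using std::chrono. The workload in main is only a placeholder; a serious benchmark would repeat the measurement and make sure the optimizer doesn't remove the work.

#include <chrono>
#include <iostream>
#include <vector>

// Times a callable once and returns elapsed seconds.
template <typename F>
double timeIt( F f )
{
    auto start = std::chrono::steady_clock::now();
    f();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double>( stop - start ).count();
}

int main()
{
    std::vector<int> v( 1000000, 1 );   // placeholder data
    long long sum = 0;
    std::cout << timeIt( [&]{ for ( int x : v ) sum += x; } ) << " s\n";
    std::cout << sum << '\n';           // use the result so the loop isn't optimized away
}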
If you write:
int array[2][3] = { ... };
and then something like:
for ( int i = 0; i != 2; ++ i ) {
    for ( int j = 0; j != 3; ++ j ) {
        // do something with array[i][j]...
    }
}
It's hard to imagine a compiler which doesn't actually generate something along the lines of:
for ( int* p = &array[0][0]; p != &array[0][0] + 2 * 3; ++ p ) {
    // do something with *p...
}
This is one of the most fundamental optimizations around, and has been for at least 30 years.
If you dynamically allocate as you propose, the compiler will not be able to apply this optimization. And even for a single access: the dynamically allocated matrix has poorer locality and requires more memory accesses, so it would likely be slower.
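(For concreteness, I'm assuming "dynamically allocate" means the usual pointer-to-pointer construct, something like the sketch below; each row is a separate allocation, so the rows need not be contiguous, and every element access goes through two pointers.)

// Assumed shape of the dynamically allocated variant: an array of
// pointers, each pointing at a separately allocated row.
int** array = new int*[2];
for ( int i = 0; i != 2; ++ i ) {
    array[i] = new int[3];
}

// array[i][j] now costs two loads: one for array[i], one for the
// element itself; and the rows may be scattered across the heap.

for ( int i = 0; i != 2; ++ i ) {
    delete [] array[i];
}
delete [] array;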
If you're in C++, you would normally write a Matrix class, using std::vector for the memory, and calculating the indexes explicitly using multiplication. (The improved locality will probably result in better performance, despite the multiplication.) This could make it more difficult for the compiler to do the above optimization, but if this turns out to be an issue, you can always provide specialized iterators for handling this one particular case. You end up with more readable and more flexible code (e.g. the dimensions don't have to be constant), at little or no loss of performance.