My code continuously needs to compute values from the following function:

inline double f(double x)
{
    return tanh( 3*(5-x) ) * 0.5 + 0.5;
}

After some profiling, I discovered that this is where most of my program's time is spent, so I'm considering replacing the call with a precomputed lookup table. The constraints are that the table may not use less than 100 MB, and I can use up to 2 GB. Linear interpolation will be used for points that fall between two entries in the table.
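To make the question concrete, a uniform-grid table with linear interpolation could be sketched like this (the range X_MIN/X_MAX and the table size N are hypothetical placeholders, not from the question; N = 2^24 doubles is roughly 128 MiB, which meets the 100 MB floor):

```cpp
#include <cmath>
#include <vector>

// Hypothetical domain and table size -- adjust to the real use case.
const double X_MIN = 0.0, X_MAX = 10.0;
const unsigned N = 1u << 24;  // 16M doubles, ~128 MiB

inline double f(double x) { return std::tanh( 3*(5-x) ) * 0.5 + 0.5; }

// Precompute f on a uniform grid over [X_MIN, X_MAX].
std::vector<double> build_table()
{
    std::vector<double> t(N);
    const double step = (X_MAX - X_MIN) / (N - 1);
    for (unsigned i = 0; i < N; ++i)
        t[i] = f(X_MIN + step * i);
    return t;
}

// Linear interpolation between the two nearest table entries.
inline double lookup(const std::vector<double>& t, double x)
{
    double pos = (x - X_MIN) * (N - 1) / (X_MAX - X_MIN);
    unsigned i = static_cast<unsigned>(pos);
    if (i >= N - 1) i = N - 2;           // clamp so t[i + 1] stays in range
    double frac = pos - i;               // position between the two samples
    return t[i] + frac * (t[i + 1] - t[i]);
}
```

Note that each `lookup` touches two adjacent doubles, so for random x the access pattern is effectively random over the whole table, which is exactly the scenario the answer below benchmarks.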
If you have a huge lookup table (hundreds of MB, as you said) that does not fit in cache, the memory lookup time will most likely be much higher than the calculation itself. RAM is "very slow", especially when fetching from random locations of huge arrays.
Here is synthetic test:
#include <boost/progress.hpp>
#include <iostream>
#include <ostream>
#include <vector>
#include <cmath>

using namespace boost;
using namespace std;

inline double calc(double x)
{
    return tanh( 3*(5-x) ) * 0.5 + 0.5;
}

template<typename F>
void test(F &&f)
{
    progress_timer t;
    volatile double res;
    for (unsigned i = 0; i != 1 << 26; ++i)
        res = f(i);
    (void)res;
}

int main()
{
    const unsigned size = (1 << 26) + 1;
    vector<double> table(size);
    cout << "table size is " << 1.0*sizeof(double)*size/(1 << 20) << "MiB" << endl;
    cout << "calc ";
    test(calc);
    cout << "dummy lookup ";
    test([&](unsigned i){ return table[(i << 12) % size]; }); // dummy lookup, not real values
}
Output on my machine is:
table size is 512MiB
calc 0.52 s
dummy lookup 0.92 s
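If Boost isn't available, the same measurement can be sketched with std::chrono instead of boost::progress_timer (time_it is a hypothetical helper, not part of the original test; absolute numbers will of course differ by machine):

```cpp
#include <chrono>

// Hypothetical replacement for boost::progress_timer: returns elapsed
// wall-clock seconds for 2^26 calls of f, mirroring the loop in test().
template<typename F>
double time_it(F&& f)
{
    auto start = std::chrono::steady_clock::now();
    volatile double res;
    for (unsigned i = 0; i != 1u << 26; ++i)
        res = f(i);
    (void)res;
    return std::chrono::duration<double>(
               std::chrono::steady_clock::now() - start).count();
}
```

The volatile result keeps the compiler from optimizing the loop away, just as in the original benchmark.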