Question
I have built TensorFlow from source and I am using its C API. So far everything works well, and I am also using AVX / AVX2. My from-source build was also compiled with XLA support. I would now like to activate XLA (Accelerated Linear Algebra) as well, since I hope it will once again increase performance / speed during inference.
If I start my run right now I get this message:
2019-06-17 16:09:06.753737: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1541] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
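As an aside (my own sketch, not from the original post): the envvar mentioned in the warning can also be set from C code, provided it happens before the first TensorFlow call in the process, since TF parses TF_XLA_FLAGS only once at first use. Assuming a POSIX system:
#include <stdlib.h>
// Must run before any TensorFlow API call in this process,
// because TF_XLA_FLAGS is read only once.
setenv("TF_XLA_FLAGS", "--tf_xla_cpu_global_jit", /*overwrite=*/1);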
On the official XLA page (https://www.tensorflow.org/xla/jit) I found this information about how to turn on JIT compilation at the session level:
# Config to turn on JIT compilation
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)
Here (https://github.com/tensorflow/tensorflow/issues/13853) it was explained how to set the config via TF_SetConfig in the C API. I had previously been able to limit TensorFlow to one core using the output of this Python code:
config1 = tf.ConfigProto(device_count={'CPU':1})
serialized1 = config1.SerializeToString()
print(list(map(hex, serialized1)))
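For reference (my own hand decoding, not part of the original post), that particular config serializes to the device_count map entry in protobuf wire format, which as a C byte array would look roughly like this (note the implementation below then uses the thread-count fields rather than device_count, but the principle is the same):
// ConfigProto{device_count: {"CPU": 1}}, decoded by hand (an assumption):
uint8_t config[] = {0x0a, 0x07,                 // field 1 (device_count), length 7
                    0x0a, 0x03, 'C', 'P', 'U',  // map entry key: "CPU"
                    0x10, 0x01};                // map entry value: 1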
I implemented it as follows:
uint8_t intra_op_parallelism_threads = maxCores; // for operations that can be parallelized internally, such as matrix multiplication
uint8_t inter_op_parallelism_threads = maxCores; // for operations that are independent in your TensorFlow graph because there is no directed path between them in the dataflow graph
uint8_t config[]={0x10,intra_op_parallelism_threads,0x28,inter_op_parallelism_threads};
TF_SetConfig(sess_opts,config,sizeof(config),status);
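For what it's worth, these bytes are just the protobuf wire encoding of ConfigProto: 0x10 is the tag of field 2 (intra_op_parallelism_threads, varint) and 0x28 the tag of field 5 (inter_op_parallelism_threads, varint). A commented sketch of the same array (this only holds for thread counts below 128, since larger values need multi-byte varints):
uint8_t config[] = {0x10, intra_op_parallelism_threads,   // field 2: intra_op_parallelism_threads
                    0x28, inter_op_parallelism_threads};  // field 5: inter_op_parallelism_threads
TF_SetConfig(sess_opts, config, sizeof(config), status);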
I therefore assumed the same approach would work for activating XLA:
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
output = config.SerializeToString()
print(list(map(hex, output)))
Implementation this time:
uint8_t config[]={0x52,0x4,0x1a,0x2,0x28,0x1};
TF_SetConfig(sess_opts,config,sizeof(config),status);
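Decoded the same way (my annotation, not from the post), these bytes encode graph_options (field 10, tag 0x52) containing optimizer_options (field 3, tag 0x1a) with global_jit_level set to ON_1 (field 5, value 1):
uint8_t config[] = {0x52, 0x04,   // field 10 (graph_options), length 4
                    0x1a, 0x02,   //   field 3 (optimizer_options), length 2
                    0x28, 0x01};  //     field 5 (global_jit_level) = 1 (ON_1)
TF_SetConfig(sess_opts, config, sizeof(config), status);
So the byte string itself looks correct; the warning quoted in the question suggests that for XLA:CPU the TF_XLA_FLAGS=--tf_xla_cpu_global_jit envvar is needed in addition to the session-level JIT setting.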
However, XLA still seems to be deactivated. Can somebody help me out with this issue? Or, if you have another look at the warning:
2019-06-17 16:09:06.753737: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1541] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Does that mean I have to set XLA_FLAGS during the build?
Thanks in advance!
Answer 1:
OK, I figured out how to use the XLA JIT; it is only available in the c_api_experimental.h header. Just include this header and then use:
TF_EnableXLACompilation(sess_opts,true);
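A minimal sketch of how this call might fit into session setup (assuming a TensorFlow build with XLA; note that c_api_experimental.h is, as the name says, experimental and may change between versions):
#include <stdbool.h>
#include "tensorflow/c/c_api.h"
#include "tensorflow/c/c_api_experimental.h"

TF_Status* status = TF_NewStatus();
TF_SessionOptions* sess_opts = TF_NewSessionOptions();
TF_EnableXLACompilation(sess_opts, true);  // turns on the global JIT level in this session's config
// ... then create the session with these options as usual, e.g.:
// TF_Session* sess = TF_NewSession(graph, sess_opts, status);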
Answer 2:
@tre95 I have tried

#include "c_api_experimental.h"
TF_SessionOptions* options = TF_NewSessionOptions();
TF_EnableXLACompilation(options, true);

but compilation failed with the error collect2: error: ld returned 1 exit status. However, it compiles and runs successfully if I do not do this.
Source: https://stackoverflow.com/questions/56633372/how-can-i-activate-tensorflows-xla-for-the-c-api