I have a shared machine with 64 cores on which I have a big pipeline of Keras functions that I want to run. The thing is that it seems that Keras automatically uses all the available cores, and I need to limit how many it takes on this shared machine.
As @Yu-Yang suggested, I used these lines before each fit I do:
from keras import backend as K
K.set_session(K.tf.Session(config=K.tf.ConfigProto(intra_op_parallelism_threads=32, inter_op_parallelism_threads=32)))
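For context, here is a minimal sketch of how this can be wrapped and called before each fit in a pipeline (assuming TensorFlow 1.x with standalone Keras; the helper name limit_keras_threads and the model/data variables are placeholders, not part of the original answer):

from keras import backend as K

def limit_keras_threads(n_threads=32):
    # Replace the current TF session with one whose thread pools are capped
    # (TF 1.x session API; limit_keras_threads is a hypothetical helper name)
    config = K.tf.ConfigProto(intra_op_parallelism_threads=n_threads,
                              inter_op_parallelism_threads=n_threads)
    K.set_session(K.tf.Session(config=config))

# Example usage before each fit in the pipeline:
# limit_keras_threads(32)
# model.fit(x_train, y_train)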
You can check the CPU usage with htop to verify how many cores are actually being used.
As mentioned in this solution (https://stackoverflow.com/a/54832345/5568660), if you want to do this with TensorFlow or tensorflow-gpu directly, you can build the tf.ConfigProto yourself and feed it to the session:
import tensorflow as tf

config = tf.ConfigProto(intra_op_parallelism_threads=32,
                        inter_op_parallelism_threads=32,
                        allow_soft_placement=True)
session = tf.Session(config=config)
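To have Keras actually run inside this session (rather than creating its own), you can register it with the backend. A short sketch, assuming TF 1.x with standalone Keras (with tf.keras you would use tf.keras.backend.set_session instead):

from keras import backend as K

# Register the configured session so subsequent model.fit calls
# respect the thread limits set above.
K.set_session(session)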