I am curious about the TensorFlow implementation of tf.nn.conv2d(...). To call it, one simply runs tf.nn.conv2d(...). However, I'm going down the
TL;DR: The implementation of tf.nn.conv2d() is written in C++, which invokes optimized code using either Eigen (on CPU) or the cuDNN library (on GPU). You can find the implementation here.
The chain of functions that you mentioned in the question (from tf.nn.conv2d() down) are Python functions for building a TensorFlow graph, but these do not invoke the implementation. Recall that, in TensorFlow, you first build a symbolic graph, then execute it.
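As a toy illustration of this build-then-run model (this is deliberately not TensorFlow code, just a minimal sketch of deferred evaluation; the Node class and helper names are mine):

```python
class Node:
    """A graph node that records an operation but does not run it."""
    def __init__(self, fn, inputs):
        self.fn, self.inputs = fn, inputs

    def run(self, feed):
        # Recursively evaluate inputs: nested Nodes are run,
        # placeholder names are looked up in the feed dict,
        # and plain constants pass through unchanged.
        vals = [i.run(feed) if isinstance(i, Node) else feed.get(i, i)
                for i in self.inputs]
        return self.fn(*vals)

def add(a, b): return Node(lambda x, y: x + y, [a, b])
def mul(a, b): return Node(lambda x, y: x * y, [a, b])

# Building the graph performs no arithmetic...
graph = mul(add('x', 2), 3)
# ...only run() actually evaluates it.
print(graph.run({'x': 4}))  # 18
```

TensorFlow's Session.run() plays the role of run() here, except that the evaluation is handed off to a C++ backend rather than done in Python.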
The implementation of tf.nn.conv2d() is only executed when you call Session.run(), passing a Tensor whose value depends on the result of some convolution. For example:
import tensorflow as tf

input = tf.placeholder(tf.float32)
filter = tf.Variable(tf.truncated_normal([5, 5, 3, 32], stddev=0.1))
conv = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')
sess = tf.Session()
result = sess.run(conv, feed_dict={input: ...})  # <== Execution happens here.
Invoking sess.run(...) tells TensorFlow to run all the ops that are needed to compute the value of conv, including the convolution itself. The path from here to the implementation is somewhat complicated, but goes through the following steps:
1. sess.run() calls the TensorFlow backend to fetch the value of conv.
2. The backend eventually dispatches to the tensorflow::OpKernel that corresponds to the convolution operator, by calling its Compute() method.
3. The "Conv2D" OpKernel is implemented here, and its Compute() method is here. Because this op is performance critical for many workloads, the implementation is quite complicated, but the basic idea is that the computation is offloaded to either the Eigen Tensor library (if running on CPU), or cuDNN's optimized GPU implementation.