Tensorflow - Why are there so many similar or even duplicate functions in tf.nn and tf.layers / tf.losses / tf.contrib.layers etc?

Asked 2020-12-29 10:25 by 無奈伤痛

In Tensorflow (as of v1.2.1), it seems that there are (at least) two parallel APIs to construct computational graphs. There are functions in tf.nn, like conv2d, avg_pool, relu, etc., which have a rather low-level feel, while similar functionality is offered under tf.layers, tf.losses and tf.contrib.layers. Why the duplication?

1 Answer
  • 2020-12-29 11:04

    Tensorflow proposes, on the one hand, a low-level API (tf.*, tf.nn.*, ...) and, on the other hand, a higher-level API (tf.layers.*, tf.losses.*, ...).

    The goal of the higher-level API is to provide functions that greatly simplify the design of the most common neural nets. The lower-level API is there for people with special needs, or who wish to keep finer control over what is going on.

    The situation is a bit confusing though, because some functions have the same or similar names, and there is no clear way to tell at first sight which namespace corresponds to which level of the API.

    Now, let's look at conv2d for example. A striking difference between tf.nn.conv2d and tf.layers.conv2d is that the latter takes care of all the variables needed for weights and biases. A single line of code and, voilà, you have just created a convolutional layer. With tf.nn.conv2d, you have to declare the weights variable yourself before passing it to the function. And as for the biases, they are actually not handled at all: you need to add them yourself later.
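    To make the contrast concrete, here is a minimal sketch using the TensorFlow 1.x API discussed above (the input shape, filter sizes and initializers are illustrative choices, nothing more):

        import tensorflow as tf

        x = tf.placeholder(tf.float32, [None, 28, 28, 1])

        # Low level: declare the weights yourself; tf.nn.conv2d does not
        # handle biases, so they must be created and added separately.
        w = tf.get_variable("w", [5, 5, 1, 32],
                            initializer=tf.truncated_normal_initializer(stddev=0.1))
        b = tf.get_variable("b", [32], initializer=tf.zeros_initializer())
        y_low = tf.nn.bias_add(
            tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME"), b)

        # High level: one call creates and initializes weights and biases.
        y_high = tf.layers.conv2d(x, filters=32, kernel_size=5, padding="same")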

    Add to that that tf.layers.conv2d also offers to add regularization and activation in the same function call, and you can imagine how this reduces code size when one's needs are covered by the higher-level API.
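    For example (an illustrative sketch; the choice of an l2 regularizer and its scale are arbitrary):

        reg = tf.contrib.layers.l2_regularizer(scale=1e-4)
        y = tf.layers.conv2d(x, filters=64, kernel_size=3, padding="same",
                             activation=tf.nn.relu, kernel_regularizer=reg)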

    The higher level also makes some decisions by default that could be considered best practices. For example, losses in tf.losses are added to the tf.GraphKeys.LOSSES collection by default, which makes recovery and summation of the various components easy and somewhat standardized. If you use the lower-level API, you need to do all of that yourself. Obviously, you also need to be careful when you start mixing low- and high-level API functions there.
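    A sketch of what that buys you, continuing the example above (the labels placeholder and the final dense layer are illustrative):

        labels = tf.placeholder(tf.float32, [None, 10])
        logits = tf.layers.dense(tf.contrib.layers.flatten(y), units=10)

        # Registered in the tf.GraphKeys.LOSSES collection automatically:
        xent = tf.losses.softmax_cross_entropy(onehot_labels=labels,
                                               logits=logits)

        # Sums everything in LOSSES plus, by default, the regularization
        # losses created by kernel_regularizer above.
        total_loss = tf.losses.get_total_loss()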

    The higher-level API is also an answer to a great need from people who have otherwise been used to similarly high-level functions in other frameworks, Theano aside. This is rather obvious when one considers the number of alternative higher-level APIs built on top of tensorflow, such as keras 2 (now part of the official tensorflow API), slim (in tf.contrib.slim), tflearn, tensorlayer, and the like.

    Finally, if I may add a piece of advice: if you are beginning with tensorflow and do not have a preference towards a particular API, I would personally encourage you to stick to the tf.keras.* API (a short sketch follows the list below):

    • Its API is friendly and at least as good as the other high-level APIs built on top of the low-level tensorflow API
    • It has a clear namespace within tensorflow (although it can -- and sometimes should -- be used with parts from other namespaces, such as tf.data)
    • It is now a first-class citizen of tensorflow (it used to be in tf.contrib.keras), and care is taken to make new tensorflow features (such as eager execution) compatible with keras.
    • Its generic implementation can use other toolkits such as CNTK, and so does not lock you to tensorflow.
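    As a taste of that API, here is a minimal tf.keras sketch (assumes a TensorFlow version where Keras is integrated, e.g. 1.4+; the architecture itself is arbitrary):

        import tensorflow as tf

        # The whole model, including all weights, biases and activations,
        # is declared layer by layer in one expression.
        model = tf.keras.models.Sequential([
            tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu",
                                   input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="categorical_crossentropy",
                      metrics=["accuracy"])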