I am looking through the Caffe prototxt for deep residual networks and have noticed the appearance of a "Scale" layer.
layer {
    bottom: "res2b_branch2b"
    top: "res2b_branch2b"
    name: "scale2b_branch2b"
    type: "Scale"
    scale_param {
        bias_term: true
    }
}
What does this layer do, and what does scale_param { bias_term: true } mean?
You can find detailed documentation on Caffe here.
Specifically, the documentation for the "Scale" layer reads:
Computes a product of two input Blobs, with the shape of the latter Blob "broadcast" to match the shape of the former. Equivalent to tiling the latter Blob, then computing the elementwise product.
The second input may be omitted, in which case it's learned as a parameter of the layer.
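The "broadcast"/"tiling" behavior of the two-input case can be sketched in NumPy (shapes here are made up for illustration; by default Caffe aligns the second blob's shape with the first blob's axes starting at the channel axis):

```python
import numpy as np

# First bottom: a typical (N, C, H, W) blob.
x = np.arange(2 * 3 * 4 * 5, dtype=np.float32).reshape(2, 3, 4, 5)
# Second bottom: one scale value per channel, shape (C,).
s = np.array([1.0, 2.0, 3.0], dtype=np.float32)

# Reshape s to (1, C, 1, 1) so NumPy broadcasting "tiles" it over N, H, W,
# then take the elementwise product -- this is what the Scale layer computes.
y = x * s.reshape(1, -1, 1, 1)

# Same result as explicitly tiling s to x's full shape first.
tiled = np.tile(s.reshape(1, -1, 1, 1), (2, 1, 4, 5))
assert np.array_equal(y, x * tiled)
```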
It seems like, in your case (a single "bottom"), this layer learns a scale factor to multiply "res2b_branch2b". Moreover, scale_param { bias_term: true } means the layer learns not only a multiplicative scaling factor but also a constant term. So the forward pass computes:
res2b_branch2b <- res2b_branch2b * \alpha + \beta
During training the net tries to learn the values of \alpha and \beta.
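That forward pass can be sketched in NumPy (the per-channel parameter shapes and the values of \alpha and \beta below are assumptions for illustration; in Caffe they are learned blobs of the layer):

```python
import numpy as np

def scale_forward(x, alpha, beta):
    """Single-bottom Scale layer with bias_term: true.

    x:     input blob, shape (N, C, H, W)
    alpha: learned per-channel scale, shape (C,)
    beta:  learned per-channel bias, shape (C,)
    """
    # Reshape to (1, C, 1, 1) so the parameters broadcast over N, H, W.
    return x * alpha.reshape(1, -1, 1, 1) + beta.reshape(1, -1, 1, 1)

x = np.ones((1, 2, 2, 2), dtype=np.float32)
alpha = np.array([2.0, 3.0], dtype=np.float32)   # hypothetical learned scales
beta = np.array([0.5, -1.0], dtype=np.float32)   # hypothetical learned biases
y = scale_forward(x, alpha, beta)
# channel 0: 1 * 2.0 + 0.5 = 2.5 ; channel 1: 1 * 3.0 - 1.0 = 2.0
```

This pairing of a learned scale and bias is why the Scale layer commonly follows "BatchNorm" in the ResNet prototxt: Caffe's "BatchNorm" layer only normalizes, and the Scale layer supplies the affine transform.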