I am teaching myself data science and something peculiar has caught my eye. In a sample DNN tutorial I was working on, I found that the Keras layer.get_weights() function returned an empty list for some layers.
It might also be that you are trying to get weights from layers that don't have any weights. Say you've defined the model below:
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model
from tensorflow.keras.backend import l2_normalize

input = Input(shape=(4,))
hidden_layer_0 = Dense(4, activation='tanh')(input)
hidden_layer_1 = Dense(4, activation='tanh')(hidden_layer_0)
output = Lambda(lambda t: l2_normalize(100000*t, axis=1))(hidden_layer_1)
model = Model(input, output)
and want to print the weights of each layer (after having built/trained it). You can do this as follows:
for layer in model.layers:
    print("===== LAYER: ", layer.name, " =====")
    if layer.get_weights() != []:
        weights = layer.get_weights()[0]
        biases = layer.get_weights()[1]
        print("weights:")
        print(weights)
        print("biases:")
        print(biases)
    else:
        print("weights: ", [])
If you run this code, you will get something like this:
===== LAYER: input_1 =====
weights: []
===== LAYER: dense =====
weights:
[[-6.86365739e-02 2.24897027e-01 ... 1.90570995e-01]]
biases:
[-0.02512692 -0.00486927 ... 0.04254978]
===== LAYER: dense_1 =====
weights:
[[-6.86365739e-02 2.24897027e-01 ... 1.90570995e-01]]
biases:
[-0.02512692 0.00933884 ... 0.04254978]
===== LAYER: lambda =====
weights: []
As you can see, the first (Input) and the last (Lambda) layers don't have any weights, so get_weights() returns an empty list for them.
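If you only care about the layers that actually carry parameters, you can filter on the truthiness of get_weights() instead of printing everything. A minimal sketch (assuming TensorFlow/Keras is installed; the Lambda body here is a simplified stand-in for the normalization above):

```python
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

inp = Input(shape=(4,))
hidden = Dense(4, activation='tanh')(inp)
out = Lambda(lambda t: t * 2)(hidden)  # parameter-free layer
model = Model(inp, out)

# Keep only layers whose get_weights() is non-empty (i.e. the Dense layer).
layers_with_weights = [l for l in model.layers if l.get_weights()]
names = [l.name for l in layers_with_weights]

# The Dense layer's weights come as [kernel, bias].
kernel, bias = layers_with_weights[0].get_weights()
```

Here kernel has shape (4, 4) and bias has shape (4,), matching the Dense(4) layer fed by a 4-dimensional input; the Input and Lambda layers are filtered out.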