Convolutional Neural Networks Intuition - Difference in outcome between large kernel size vs. high number of filters
Question

I wanted to understand the architectural intuition behind the difference between:

`tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1))`

and

`tf.keras.layers.Conv2D(32, (7,7), activation='relu', input_shape=(28, 28, 1))`

My assumptions:

- As kernel size increases, more complex feature patterns can be matched in a single convolution step.
- As the number of filters increases, a larger variety of smaller features can define a particular layer.

How and when (if possible, kindly give scenarios) do these two design choices lead to different outcomes?
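One concrete, checkable difference between the two layers is their trainable parameter count. The sketch below is plain Python arithmetic (no TensorFlow required), assuming the standard Conv2D parameter formula `filters * (kernel_h * kernel_w * in_channels + 1)` with a single input channel, as implied by `input_shape=(28, 28, 1)`; the helper name `conv2d_params` is my own, not a Keras API:

```python
def conv2d_params(filters, kernel_h, kernel_w, in_channels=1):
    # Each filter holds kernel_h * kernel_w * in_channels weights plus one bias term.
    return filters * (kernel_h * kernel_w * in_channels + 1)

# Conv2D(64, (3,3)) on a 1-channel input: 64 * (3*3*1 + 1)
print(conv2d_params(64, 3, 3))  # 640

# Conv2D(32, (7,7)) on a 1-channel input: 32 * (7*7*1 + 1)
print(conv2d_params(32, 7, 7))  # 1600
```

So the 7x7 variant spends more parameters per filter (a larger spatial receptive field per convolution) while producing fewer output feature maps, whereas the 3x3 variant spends its budget on more, smaller detectors. These counts should match what `model.summary()` reports for each layer.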