Convolution example with small (3×3) and large (5×5) filter sizes
| Smaller Filter Sizes (3×3) | Larger Filter Sizes (5×5) |
| --- | --- |
| Two stacked 3×3 kernels reduce the image size by 4. | One 5×5 kernel gives the same reduction. |
| Uses (3×3) + (3×3) = 18 weights. | Uses 5×5 = 25 weights. |
| Fewer weights but more layers, so it is computationally more efficient. | More weights but fewer layers, so it is computationally more expensive. |
| With more layers, the network learns more complex, more non-linear features. | With fewer layers, it learns simpler non-linear features. |
| More layers require more memory for the activations stored during backpropagation. | Fewer layers use less memory during backpropagation. |
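The size-reduction and weight-count claims above can be checked with a minimal NumPy sketch (the naive `conv2d_valid` helper and the 32×32 input size are assumptions for illustration, not part of the original note):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution: output shrinks by (k - 1) per axis."""
    k = kernel.shape[0]
    h, w = image.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32))

# Two stacked 3x3 convolutions: each shrinks the image by 2 per axis.
k3 = rng.normal(size=(3, 3))
out_stacked = conv2d_valid(conv2d_valid(image, k3), k3)

# One 5x5 convolution: shrinks by 4 per axis in a single step.
k5 = rng.normal(size=(5, 5))
out_single = conv2d_valid(image, k5)

print(out_stacked.shape)  # (28, 28)
print(out_single.shape)   # (28, 28) -- same spatial reduction
print(2 * 3 * 3)          # 18 weights for the stacked 3x3 pair
print(5 * 5)              # 25 weights for the single 5x5
```

Both paths map a 32×32 input to a 28×28 output, but the stacked 3×3 route gets there with 18 weights instead of 25, which is the efficiency argument made in the table.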