The tiled convolutional layer was introduced in this NIPS 2010 paper: http://papers.nips.cc/paper/4136-tiled-convolutional-neural-networks.pdf. Per the authors, it can learn scale and rotational invariance, with the degree of weight tying controlled by the tiling parameter (k in the paper). This could be a useful alternative to enlarging your dataset through augmentation.
Excerpt from the paper:
tiled convolutional neural networks ... use a regular “tiled” pattern of tied weights that does not require that adjacent hidden units share identical weights, but instead requires only that hidden units k steps away from each other to have tied weights. By pooling over neighboring units, this architecture is able to learn complex invariances (such as scale and rotational invariance) beyond translational invariance.
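To make the tiling idea concrete, here is a rough NumPy sketch of my own (not code from the paper): k distinct filters are cycled across output positions, so hidden units exactly k steps apart share weights, and k = 1 recovers an ordinary convolution.

```python
import numpy as np

def tiled_conv1d(x, filters, k):
    """1-D 'valid' tiled convolution (illustrative sketch only).

    filters has shape (k, filter_len): k separate filters are cycled
    across output positions, so hidden units exactly k steps apart
    share weights. k = 1 gives a standard (fully tied) convolution.
    Uses the cross-correlation convention common in deep learning libraries.
    """
    assert filters.shape[0] == k
    filter_len = filters.shape[1]
    out = np.empty(len(x) - filter_len + 1)
    for i in range(len(out)):
        w = filters[i % k]  # which weight set is used depends on position mod k
        out[i] = np.dot(w, x[i:i + filter_len])
    return out

# k = 2: every second output unit reuses the same weights.
x = np.random.randn(10)
filters = np.random.randn(2, 3)
print(tiled_conv1d(x, filters, k=2))
```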
I'm not sure if this can be trained like a regular CNN.
The implementation would require some work, but the API wouldn't need much modification: a tiling parameter could be added cleanly to the Convolution1D and Convolution2D classes.
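To give a rough idea of what that might look like, here is a hypothetical usage sketch; the `tiling` argument below does not exist in Keras and is purely illustrative of the proposed addition.

```python
# Hypothetical API sketch -- the `tiling` argument is NOT part of Keras today.
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D

model = Sequential()
# tiling=1 would keep the current behaviour (all positions share one filter bank);
# tiling=k would maintain k filter banks cycled across positions, with pooling
# over neighbouring units providing the extra invariances described in the paper.
model.add(Convolution2D(32, 3, 3, input_shape=(3, 32, 32), tiling=2))
model.add(MaxPooling2D(pool_size=(2, 2)))
```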
Thoughts and comments are welcome!
Thanks for all the great work on Keras!
-Angad
Stop the bot ;) and let it live/open.
Would be great to add this feature!