I need to implement a Sparse layer such that the neurons in my Sparse layer are only connected to specific neurons in the previous layer. For instance, I could have 200 neurons in my Sparse layer connected only to the even-indexed neurons in the previous layer, and another 200 neurons connected only to the odd-indexed ones. The real use cases are more complex, as they involve modeling the links between nodes of a graph, i.e. there's no obvious pattern.
I was thinking of simply making a variation of the Dense layer, where I supply a custom 'masking' matrix that is multiplied with the weights before multiplying with the input. However, that would still make all of the weights (incl. unused ones) part of self.trainable_weights, and I can imagine other issues with this approach. I am therefore wondering if somebody could give me a hint toward a good way of implementing this?
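For reference, here is a minimal sketch of the masking idea using the modern tf.keras API (the class name `MaskedDense` and the mask layout are illustrative, not an existing Keras API). The masked kernel entries receive zero gradient, so they never change from their initial values, though they do still occupy memory and appear in `trainable_weights`, which is exactly the limitation raised above:

```python
import numpy as np
import tensorflow as tf

class MaskedDense(tf.keras.layers.Layer):
    """Dense layer whose kernel is elementwise-multiplied by a fixed
    binary connectivity mask, so masked-out weights never affect the
    output and receive zero gradient. Illustrative sketch only."""

    def __init__(self, units, mask, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        # mask: (input_dim, units) array of 0/1 defining connectivity
        self.mask = tf.constant(mask, dtype=tf.float32)

    def build(self, input_shape):
        self.kernel = self.add_weight(
            name="kernel", shape=(input_shape[-1], self.units),
            initializer="glorot_uniform", trainable=True)
        self.bias = self.add_weight(
            name="bias", shape=(self.units,),
            initializer="zeros", trainable=True)

    def call(self, inputs):
        # Zero out disallowed connections before the matmul
        return tf.matmul(inputs, self.kernel * self.mask) + self.bias

# Example: 4 inputs, 2 units; unit 0 sees even-indexed inputs,
# unit 1 sees odd-indexed inputs.
mask = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=np.float32)
layer = MaskedDense(2, mask)
out = layer(tf.ones((1, 4)))
```

A truly sparse implementation would instead store only the allowed weights (e.g. via `tf.gather`/segment sums or a sparse matmul), which avoids the wasted parameters at the cost of a more involved `call`.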
I'll gladly submit the code in a PR once I'm done, if anybody's interested.
Would it perhaps make sense to use http://deeplearning.net/software/theano/library/tensor/nnet/blocksparse.html ?
@PiranjaF I have a similar issue as yours. Have you solved it yet?