Keras: Who is using shared_node/shared_layer/Siamese, and what for?

Created on 27 Mar 2016 · 13 comments · Source: keras-team/keras

Quick survey of who is using layer weight sharing, to figure out how to handle backwards compatibility in the new Keras version. Are you using:

  • layers.core.add_shared_layer?
  • graph.add_shared_node?
  • Siamese layer in a different context?

Thanks.


All 13 comments

I use graph.add_shared_node exclusively...

I wrote all three of them, but never used them. Does that make me a hypocrite? :)

I use graph.add_shared_node

I wrote all three of them, but never used them.

Really? Then why did you write them?

Current plans are to support Graph (although it is deprecated), including add_shared_node. Siamese is almost certainly getting removed entirely with no legacy support. Unsure yet what to do with add_shared_layer, waiting for more replies.

I use layers.core.add_shared_layer

Really? Then why did you write them?

@fchollet Several users were asking for that. I answered several tickets about how to hack together a solution for it. @farizrahman4u saved us all by writing those layers.

Really? Then why did you write them?

There were other people in need. There were serious discussions on weight tying going on at that time. Also, it was a mental exercise for me.

Current plans are to support Graph (although it is deprecated), including add_shared_node. Siamese is almost certainly getting removed entirely with no legacy support. Unsure yet what to do with add_shared_layer, waiting for more replies.

But what about parallel graphs with shared parameters? If we want to do metric learning, we need some sort of Siamese support. In other words, we need to apply the same layer twice to different inputs, possibly with only the batch normalization differing.

My PhD research uses that. Here is another example: https://devblogs.nvidia.com/parallelforall/understanding-aesthetics-deep-learning/
Although they use Torch for development and Keras for deployment, I believe they could have used Keras all the way if it provided an easy API for this.
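
For concreteness, here is roughly what such an API could look like. A minimal sketch, assuming the functional API being introduced at the time, where calling the same layer instance on two inputs reuses its weights (layer sizes and names are illustrative):

```python
from keras.layers import Input, Dense
from keras.models import Model

left = Input(shape=(128,))
right = Input(shape=(128,))

# One layer instance holds one set of weights;
# calling it twice applies those same weights to both inputs.
encoder = Dense(64, activation='relu')
encoded_left = encoder(left)
encoded_right = encoder(right)

# A distance/merge layer on top of the two branches would complete a
# metric-learning setup; here we just expose both encodings.
model = Model([left, right], [encoded_left, encoded_right])
```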

I use layers.core.add_shared_layer & graph.add_shared_node, but wouldn't mind if they're gone as long as layers/containers keep being __call__'able.
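
For reference, __call__-ability already covers the sharing use case: a whole container can be applied to several inputs, reusing its weights each time. A minimal sketch, assuming containers stay callable as in the Keras 1.x functional API (sizes are illustrative):

```python
from keras.models import Sequential, Model
from keras.layers import Dense, Input

# Build a container once; both applications below share its weights.
encoder = Sequential()
encoder.add(Dense(32, activation='relu', input_dim=100))

a = Input(shape=(100,))
b = Input(shape=(100,))
out_a = encoder(a)  # the container itself is callable, like a layer
out_b = encoder(b)  # same weights applied to the second input
model = Model([a, b], [out_a, out_b])
```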

I use add_shared_layer() in Sequential()

I use add_shared_node(), but I don't mind at all rewriting some of my modules for a more polished API.

I use Graph.add_shared_node exclusively. I had to write additional support for getting masks from shared Embedding layers as well, so if the new functional API handles masking inherently, that would save a lot of effort.
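
For illustration, inherent masking would mean something like the following. A minimal sketch, assuming a shared Embedding with mask_zero=True propagates its mask to mask-aware layers such as LSTM automatically (vocabulary and sequence lengths are illustrative):

```python
from keras.layers import Input, Embedding, LSTM
from keras.models import Model

q = Input(shape=(20,), dtype='int32')
d = Input(shape=(20,), dtype='int32')

# mask_zero=True makes the Embedding emit a mask that mask-aware layers
# consume downstream, with no manual mask plumbing.
embed = Embedding(input_dim=10000, output_dim=64, mask_zero=True)
encode = LSTM(32)  # shared encoder; also shares weights across branches

encoded_q = encode(embed(q))
encoded_d = encode(embed(d))
model = Model([q, d], [encoded_q, encoded_d])
```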
