Softmax for >=2D tensors seems trivial in both Theano and TensorFlow, using dimension -1 when maxing and summing, or just using the built-in softmax functions.
Nevertheless, Keras enforces that the input has 2 or 3 dimensions when using softmax. Why?
This is not an answer to your question, but be aware that Theano's T.softmax uses dimension 1.
@gvtulder You are right, but still, Keras can use the code they provide in their documentation:
e_x = exp(x - x.max(axis=1, keepdims=True))
out = e_x / e_x.sum(axis=1, keepdims=True)
Where you can specify the desired axis.
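To make that snippet concrete: the two lines above can be wrapped in a small function with an `axis` argument, which works for any tensor rank. This is a minimal numpy sketch (the quoted code uses backend ops, but the arithmetic is the same):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the per-slice max for numerical stability, then normalize
    # along the chosen axis so each slice sums to 1.
    e_x = np.exp(x - x.max(axis=axis, keepdims=True))
    return e_x / e_x.sum(axis=axis, keepdims=True)

# Works for a 3-dim tensor, e.g. (batch, timesteps, classes) or any
# other interpretation of the axes:
x = np.random.randn(2, 4, 5)
p = softmax(x, axis=-1)
print(np.allclose(p.sum(axis=-1), 1.0))  # → True
```

Nothing here is RNN-specific; the softmax is just applied independently along whichever axis you pick.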
Any info about this? I actually want to apply softmax to a conv layer's output for pixelwise classification, and Keras doesn't support it.
Also, when softmaxing a 3-dim tensor, Keras assumes the shape is (batch, timesteps, classes). But what if I want to softmax a 3-dim tensor unrelated to RNNs? Will it still work, or does the function do something specific to RNNs?
I would be surprised if Keras's softmax were RNN-only. In fact, many convolutional networks end up with a 1D feature vector for each image sample, which is then passed to a softmax for classification (Inception, AlexNet, LeNet, and probably many more).
I think this is just a (possibly unintended) limitation that can be avoided.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs, but feel free to re-open it if needed.
Up. Are there any plans to add softmax for 3D tensors (like after convolution layer)?
Have you found a way to apply softmax to feature maps?
There is probably a Lambda(...) fix for it somewhere on Google; frankly, I can't remember.
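The Lambda-style workaround boils down to computing an axis-wise softmax over the 4D conv output yourself. Here is a minimal numpy sketch of the operation such a Lambda layer would wrap; inside Keras the body would use backend ops (K.exp, K.max, K.sum) instead, and the shapes and names here are illustrative, not from the original thread:

```python
import numpy as np

def pixelwise_softmax(feature_map):
    """Softmax over the channel axis of a (batch, h, w, classes) tensor.

    In a Keras model this computation would sit inside a Lambda layer
    built from backend ops; numpy is used here so the sketch runs
    stand-alone.
    """
    e = np.exp(feature_map - feature_map.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

maps = np.random.randn(2, 5, 5, 3)   # hypothetical conv output
probs = pixelwise_softmax(maps)
# Each spatial position now holds a probability distribution over classes:
print(np.allclose(probs.sum(axis=-1), 1.0))  # → True
```

This sidesteps the 2D/3D restriction entirely, since the custom layer never goes through the built-in softmax activation.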