TensorFlow.js version: 1.5.0
I'm attempting to override the gradient computation of certain ops, say ReLUs, in a loaded GraphModel. The gradient registry seems like the right place, but tf.getKernelsForBackend currently doesn't return a ReLU kernel; it seems most core ops don't yet use the gradient registry.
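For reference, this is how I'm probing the registry (a minimal check; tf.getBackend() just supplies the active backend's name):

```ts
import * as tf from '@tensorflow/tfjs';

// List the kernels registered for the active backend. On 1.5.0 this
// doesn't include a 'Relu' entry, which is the gap described above.
const kernels = tf.getKernelsForBackend(tf.getBackend());
console.log(kernels.map(k => k.kernelName));
```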
Does my description seem correct? Is there already a different way to override op gradients of loaded models? If not, could and should I try porting the specific ops I'd like to override to the new kernel and gradient registry system?
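For concreteness, here's the kind of override I'd like to be able to write once ReLU is in the registry (a sketch, assuming the registry's tf.registerGradient entry point, with a guided-backprop-style gradient standing in for my actual use case):

```ts
import * as tf from '@tensorflow/tfjs';

// Sketch only: this registration has no effect until the Relu op is
// actually routed through the kernel/gradient registry.
tf.registerGradient({
  kernelName: 'Relu',
  inputsToSave: ['x'],
  gradFunc: (dy, saved) => {
    const grad = dy as tf.Tensor;
    const [x] = saved;
    // Guided-backprop-style override: mask out negative incoming
    // gradients as well as negative pre-activations.
    return {x: () => grad.mul(x.step()).mul(grad.step())};
  },
});
```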
Thank you for all your work on TFJS!
@ludwigschubert thank you, can you please provide your use case?
Hi Ludwig, you are correct: not much is currently using the gradient registry (or even the kernel registry). We've been laying some of the groundwork for a registry approach for kernels, gradients, etc., and we'll be doing a lot more work in this direction this quarter (so there will be quite a few changes there soon).
All new ops will use the registry approach, and old ones will be ported to it. PRs that do that will be welcome, but there are a few more moving parts there. Do you need a lot of ops?
Exciting to hear! I mainly need ReLU and MaxPool at the moment. I'm experimenting on a fork right now to see if I can port those two ops to the registry; the rough shape I'm trying is sketched below. I'm not sure I know enough about the inner workings of TFJS to make it PR-ready, but I'll try. If there's a commit/PR/branch that ports an existing op, I'd appreciate any pointers. (But no pressure, I'll explore on my own, too. :D)
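Here's the skeleton I've been playing with (hypothetical; a real port would live in the backend package and write to the backend's buffers directly rather than calling other ops):

```ts
import * as tf from '@tensorflow/tfjs';

// Hypothetical skeleton of a registry-based Relu kernel for the cpu
// backend. Leaning on tf.maximum is just for prototyping; it dispatches
// to the Maximum kernel, so it won't recurse into Relu itself.
tf.registerKernel({
  kernelName: 'Relu',
  backendName: 'cpu',
  kernelFunc: ({inputs}) => {
    const {x} = inputs;
    return tf.maximum(x as tf.Tensor, 0);
  },
});
```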
Thanks for giving me extra context! :-)
@ludwigschubert can we close this issue?