TensorFlow.js version: 1.3.1
Browser: Chrome 78.0.3904.87 (Official Build) (64-bit)
I've tried to set the backend to WebGL, but it has no effect on performance. I'm using an AutoML model that returns results after approximately 5 seconds, which is far too slow. Is there a way to accelerate it, for example by setting the backend to WebGL properly?
You can reproduce it with this example.
@gitunit Thank you for reporting. Can you please provide your system configuration and browser information?
cc @annxingyuan @dsmilkov
@gitunit You can run tf.getBackend() to check whether the WebGL backend is enabled.
Also, FYI: the first detection is slow because the backend is compiling and caching the shader programs; predictions after the first one should be much faster and stable.
@rthadur Chrome Version 78.0.3904.87 (Official Build) (64-bit) on a notebook with Windows 10.
@pyu10055 I see, it works. I didn't know the first prediction takes longer, so I hadn't bothered trying another one after the first. Now I can definitely see a difference between the 'cpu' and 'webgl' backends. Thanks for the clarification.
Now I wonder how I might improve the UX so the user doesn't notice this initial warm-up time. Maybe just run a test image on load and only 'start' the web experience after the first prediction? Do you have any suggestions for this?
@gitunit After the model is loaded, you can run a warm-up step: feed the model's prediction call an all-zero tensor, e.g. tf.zeros([tensorShape]).
@pyu10055 Got it, thanks. Basically the same as loading a dummy image (I haven't noticed any performance difference using tf.zeros([tensorShape])).