Tfjs: webgl backend to accelerate inference with AutoML model

Created on 12 Nov 2019 · 5 comments · Source: tensorflow/tfjs

TensorFlow.js version

1.3.1

Browser version

Version 78.0.3904.87 (Official Build) (64-bit)

Describe the problem or feature request

I've tried to set the backend to WebGL, but it has no effect on performance.
I'm using an AutoML model which returns results after approx. 5 seconds. This is way too slow;
is there a way to accelerate it, for example by setting the backend to WebGL properly?

Code to reproduce the bug / link to feature request

You can reproduce it with this example.


All 5 comments

@gitunit Thank you for reporting. Can you please provide your system configuration and browser information?
cc @annxingyuan @dsmilkov

@gitunit You can run tf.getBackend() to check whether the WebGL backend is enabled.
Also, FYI: the first prediction is slow because the backend is compiling and caching the shader programs; predictions after the first one should be much faster and stable.
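A minimal sketch of that backend check (assuming tfjs is available as `tf`, e.g. loaded via a `<script>` tag; `ensureWebglBackend` is a hypothetical helper name, not part of the tfjs API):

```javascript
// Request the WebGL backend, then report which backend actually took
// effect. tf.setBackend() silently falls back to 'cpu' when WebGL
// initialization fails, so always verify with tf.getBackend() afterwards.
// (In tfjs 1.x setBackend is synchronous; in 2.x+ it returns a promise.)
function ensureWebglBackend(tf) {
  tf.setBackend('webgl');
  return tf.getBackend(); // 'webgl' on success, 'cpu' on fallback
}
```

Call it once before loading the model, e.g. `console.log(ensureWebglBackend(tf));`.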

@rthadur Chrome Version 78.0.3904.87 (Official Build) (64-bit) on a notebook with Windows 10.

@pyu10055 I see, it works. I didn't know the first prediction takes longer, so I hadn't bothered trying another one after the first. Now I can definitely see a difference between the 'cpu' and 'webgl' backends. Thanks for the clarification.
Now I wonder how I might improve the UX so the user doesn't notice this initial warm-up time. Maybe just run a test image on load and only 'start' the web experience after the first prediction? Do you have any suggestions for this?

@gitunit After the model is loaded, you can run a warm-up step: feed the model's prediction call an all-zero tensor, e.g. tf.zeros([tensorShape]).
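A sketch of that warm-up step. The helper names `warmUp` and `concreteShape` are illustrative, and `model.predict` / `model.inputs[0].shape` assume a plain tfjs GraphModel or LayersModel rather than the AutoML wrapper API:

```javascript
// Replace the dynamic (null or -1) batch dimension with 1 so a concrete
// all-zero tensor can be built from the model's declared input shape.
function concreteShape(inputShape) {
  return inputShape.map((d) => (d == null || d < 0 ? 1 : d));
}

// Run one throwaway prediction so the WebGL shader-compilation cost is
// paid before the user's first real image.
async function warmUp(tf, model) {
  const shape = concreteShape(model.inputs[0].shape); // e.g. [1, 224, 224, 3]
  const input = tf.zeros(shape);
  const result = await model.predict(input);
  // Free the tensors the warm-up allocated.
  input.dispose();
  result.dispose();
}
```

After the model loads you would `await warmUp(tf, model);` and only then reveal the interactive UI, hiding the shader-compilation delay behind a loading screen.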

@pyu10055 Got it, thanks. Basically the same as loading a dummy image (I haven't noticed any performance gain using tf.zeros([tensorShape]) over a dummy image).
