Tfjs: Support uint8 dtype

Created on 19 Apr 2018 · 9 comments · Source: tensorflow/tfjs

Is this a desired feature?

core feature

Most helpful comment

I can prepare a patch for superficial support if that's ok with you?

All 9 comments

Yes, it is on our roadmap for doing quantized math in our shaders, but our shaders right now only work with floats. Curious if you had a different use case in mind?

Would you be opposed to adding uint8 now, before quantized shader support lands? Even if it offers no storage-size benefit in WebGL, it would help future-proof code that wants to handle uint8 tensors, e.g. a library that returns MNIST images.

Since doing math with uint8 is not possible yet, adding uint8 as a dtype is not the highest priority for now, but we do want to add it eventually. Is it an option for you to store those images in a native Uint8Array for now?
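The suggested workaround can be sketched as follows. This is an assumption about how one might bridge the gap, not official tfjs guidance: keep the raw MNIST pixel bytes in a native Uint8Array, and widen them to a Float32Array only when a float tensor is actually needed (the `bytesToFloats` helper below is hypothetical).

```typescript
// Hypothetical helper: widen raw uint8 pixel data to float32,
// normalizing from [0, 255] to [0, 1] for float math.
function bytesToFloats(pixels: Uint8Array): Float32Array {
  const out = new Float32Array(pixels.length);
  for (let i = 0; i < pixels.length; i++) {
    out[i] = pixels[i] / 255;
  }
  return out;
}

// Toy 3-pixel "image" stored compactly as bytes.
const mnistDigit = new Uint8Array([0, 128, 255]);
const floats = bytesToFloats(mnistDigit);
// `floats` is now suitable for creating a float32 tensor,
// e.g. tf.tensor(floats, shape) in tfjs.
console.log(Array.from(floats));
```

This keeps the compact byte storage in user land while the tensor library still only ever sees floats.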

I can prepare a patch for superficial support if that's ok with you?

Starting with something small would be great. I'll be happy to review and get that in. Thanks!

@nsthorat do you want this issue to remain a placeholder, or is someone else working on it?

@rthadur we should keep this as a placeholder.

Does TFJS support uint8 quantization? I ask because it is one of the options in quantization:

tfjs.converters.save_keras_model(model, "q8", quantization_dtype=np.uint8)

but the resulting model doesn't work properly, whereas uint16 behaves correctly.
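A plausible reason uint8 degrades results more than uint16 is the coarser quantization grid. The sketch below uses a simple affine (min/scale) quantization scheme to compare round-trip error at 8 and 16 bits; this is an illustrative assumption, not necessarily the exact algorithm the tfjs converter uses.

```typescript
// Assumed affine quantization: map each value to an integer bucket
// in [0, 2^bits - 1] relative to the tensor's min and range.
function quantize(
  values: number[],
  bits: number
): { q: number[]; min: number; scale: number } {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const levels = 2 ** bits - 1;
  const scale = (max - min) / levels || 1; // guard degenerate range
  const q = values.map((v) => Math.round((v - min) / scale));
  return { q, min, scale };
}

// Reverse the mapping: recover approximate float weights.
function dequantize(q: number[], min: number, scale: number): number[] {
  return q.map((v) => min + v * scale);
}

const weights = [-1.0, -0.1, 0.02, 0.5, 1.0]; // toy weight vector
for (const bits of [8, 16]) {
  const { q, min, scale } = quantize(weights, bits);
  const restored = dequantize(q, min, scale);
  const maxErr = Math.max(
    ...restored.map((v, i) => Math.abs(v - weights[i]))
  );
  console.log(`uint${bits} max round-trip error: ${maxErr}`);
}
```

With 8 bits the worst-case rounding error is roughly half of (range / 255), which can be enough to visibly change a model's outputs; with 16 bits it shrinks by a factor of about 257, which is why uint16 often "just works" where uint8 does not.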

I'd like to see (u)int8 tensor support in order to take advantage of enhanced INT8 performance of NVIDIA's Tensor Cores when using tf.matMul().

TensorFlow.js doesn't take advantage of Tensor Cores when FP32 is used. :(
