Note: the reproducible example below has been updated; see the comments later in the thread for context.
import mxnet as mx
mx.npx.set_np()
net = mx.gluon.nn.Dense(16, in_units=16)
net.cast("float16")
net.initialize(ctx=mx.gpu())
net.hybridize()
net(mx.np.random.normal(0, 1, (16, 16), dtype=mx.np.float16, ctx=mx.gpu()))
Error:
MXNetError: Traceback (most recent call last):
File "../src/imperative/./imperative_utils.h", line 306
MXNetError: Check failed: outputs[i]->dtype() == out_types[i] (2 vs. 0) : 0-th output has invalid dtype. Expecting 0 got 2 in operator _npi_uniform
In MXNet's dtype enumeration, 0 is float32 and 2 is float16, so `_npi_uniform` produced a float32 output for a float16 array. The offending call in the initializer should be changed to `uniform_fn(-self.scale, self.scale, arr.shape, dtype=arr.dtype, out=arr)` so the sampled values match the array's dtype.
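To illustrate why passing the array's dtype through matters, here is a minimal NumPy sketch of the failure mode. The function names `init_uniform_buggy` and `init_uniform_fixed` are hypothetical, and `.astype(arr.dtype)` stands in for MXNet's `dtype=`/`out=` arguments, since `np.random.uniform` itself has no dtype parameter:

```python
import numpy as np

def init_uniform_buggy(arr, scale):
    # np.random.uniform always samples at float64 here, so the result
    # silently drops the target array's dtype (analogous to the
    # initializer emitting float32 for a float16 parameter).
    return np.random.uniform(-scale, scale, arr.shape)

def init_uniform_fixed(arr, scale):
    # Cast the samples to the target array's dtype, mirroring the
    # proposed dtype=arr.dtype fix in the initializer.
    return np.random.uniform(-scale, scale, arr.shape).astype(arr.dtype)

arr = np.zeros((2, 2), dtype=np.float16)
print(init_uniform_buggy(arr, 0.1).dtype)  # float64, dtype mismatch
print(init_uniform_fixed(arr, 0.1).dtype)  # float16, matches arr
```

With the fixed variant the initialized values always carry the parameter's dtype, which is what the dtype check in `imperative_utils.h` enforces.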
@mk-61 This should also be related to AMP.
I think the following line should be added to convert the model to FP16.
net = net.cast("float16")
Forgot to add the cast in the example. The error I met is as follows:
import mxnet as mx
mx.npx.set_np()
net = mx.gluon.nn.Dense(16, in_units=16)
net.cast("float16")
net.initialize(ctx=mx.gpu())
net.hybridize()
net(mx.np.random.normal(0, 1, (16, 16), dtype=mx.np.float16, ctx=mx.gpu()))
Error:
MXNetError: Traceback (most recent call last):
File "../src/imperative/./imperative_utils.h", line 306
MXNetError: Check failed: outputs[i]->dtype() == out_types[i] (2 vs. 0) : 0-th output has invalid dtype. Expecting 0 got 2 in operator _npi_uniform
@kohillyang Sorry, I forgot to paste the cast call when creating the issue; I've updated the code.
Hi @sxjscience, I want to fix this issue. Please assign it to me. Thanks!
Thanks for the contribution @AnshuTrivedi . You may try to fix the initializers and add a test case in https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_numpy_gluon.py
@sxjscience I'm facing difficulty with test_numpy_gluon.
It looks like the test is already written; what changes do I have to make there?
Please help me.
Thanks @AnshuTrivedi and @szha . This is now closed.