I am trying to run code on CPU using MXNet. I get a 256-D mx.NDArray, but when I try
to use asnumpy() to convert it to a NumPy array, it takes a long time (about 60s). Is this a
bug? Here is my code:
import time
import mxnet as mx
import numpy as np

def cal_dis(img1, img2, args):
    load_start_time = time.time()
    ctx = mx.cpu(0)
    # load the pretrained checkpoint and the model symbol
    _, model_args, model_auxs = mx.model.load_checkpoint(args.model_prefix, args.epoch)
    symbol = lightened_cnn_b_feature()
    model_args['data'] = mx.nd.array(np.array([img1, img2]), ctx)
    executor = symbol.bind(ctx=ctx, args=model_args, args_grad=None, grad_req="null", aux_states=model_auxs)
    executor.forward(is_train=False)
    #executor.outputs[0].wait_to_read()
    output = executor.outputs[0].asnumpy()
forward and backward are async.
asnumpy() is waiting for the actual computation to finish
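The effect can be reproduced without MXNet at all. A toy sketch below uses Python's concurrent.futures as a stand-in for MXNet's asynchronous engine (the names here are illustrative, not MXNet API): the "forward" call returns immediately, and the blocking fetch, analogous to asnumpy(), absorbs all the compute time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def heavy_compute():
    # Stand-in for the network's forward pass running on the engine thread.
    time.sleep(1.0)
    return [0.0] * 256  # pretend this is the 256-D feature

pool = ThreadPoolExecutor(max_workers=1)

start = time.time()
future = pool.submit(heavy_compute)   # "forward": returns immediately
submit_time = time.time() - start

start = time.time()
output = future.result()              # "asnumpy": blocks until compute finishes
fetch_time = time.time() - start

print(f"submit: {submit_time:.3f}s, fetch: {fetch_time:.3f}s")
```

The submit returns in microseconds, so nearly all the elapsed time shows up at the fetch, even though the fetch itself copies almost nothing.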
Is there a method for avoiding this problem when I do the forward computation?
It seems asnumpy is inevitable if we want to get the result?
@BingzheWu So you might need wait_to_read
@zihaolucky , I have tried wait_to_read. It seems that this function also takes a long time to run.
I ran into the problem too. Is there any way to make it faster?
@BingzheWu For how long? I had to add this or the service would crash after a while, but it doesn't seem too slow.
Note that the other calls, forward and the actual calculation, are asynchronous, so the wait is really waiting for those previous calls to finish, not paying the cost of the copy itself.
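This is why timing the forward call alone is misleading: you have to synchronize before stopping the clock. A self-contained sketch (plain Python threads as a stand-in for the async engine, not MXNet API; the sync call plays the role of wait_to_read/waitall):

```python
import time
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)

def forward():
    # Stand-in for an async forward pass taking ~1s on the engine thread.
    return pool.submit(lambda: time.sleep(1.0))

# Wrong: the clock stops before the work has actually run.
t0 = time.time()
pending = forward()
wrong_forward_time = time.time() - t0   # near zero
pending.result()                        # drain the queue before the next measurement

# Right: synchronize before stopping the clock.
t0 = time.time()
pending = forward()
pending.result()                        # wait for the work to complete
right_forward_time = time.time() - t0   # roughly the real compute time

print(f"without sync: {wrong_forward_time:.3f}s, with sync: {right_forward_time:.3f}s")
```

Measured this way, the cost lands on the computation where it belongs, and the copy at the end is cheap.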
I called wait_to_read after each call of 'update', and my issue was solved.
Same issue here when calling asnumpy in metric computation.
This issue is closed due to lack of activity in the last 90 days. Feel free to ping me to reopen if this is still an active issue. Thanks!