Transpose does not currently support tensors with more than 6 dimensions.
```
E mxnet.base.MXNetError: Traceback (most recent call last):
E   File "src/operator/tensor/./matrix_op-inl.h", line 459
E MXNetError: Check failed: shp.ndim() <= 6 (7 vs. 6) : Transpose support at most 6 dimensions
```
To reproduce: apply transpose to a tensor with more than 6 dimensions.
Python 3.7.3
MXNet 2.0.0
Thanks for reporting, Jean. I confirm that this affects numpy operators too.
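For contrast (not part of the original report): stock NumPy imposes no such dimension cap, so a 7-D transpose works fine there, which suggests the limit comes from mshadow's kernels rather than from the operation itself. A minimal sketch in plain NumPy:

```python
import numpy as np

# A 7-D tensor: stock NumPy transposes it without complaint,
# so the 6-D cap is specific to mshadow's compile-time kernels.
a = np.arange(2 ** 7).reshape((2,) * 7)
b = np.transpose(a, (6, 5, 4, 3, 2, 1, 0))
print(b.shape)   # still 7 axes, in reversed order
```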
Two changes will help immediately:
The root cause is really a limitation of template-based programming in mshadow. Because of that choice, the number of axes is a template parameter and thus must be expanded at compile time, so each supported dimensionality needs its own instantiation. We should move away from this approach and have a transpose implementation without mshadow instead.
Let's focus on only the immediate changes, and I will open a separate issue for the larger change.
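To illustrate the larger change (my sketch, not code from the repo): a runtime transpose can compute each source offset with stride arithmetic, so the number of dimensions is just data rather than a template parameter. A minimal Python model of the idea, with one loop iteration playing the role of one kernel thread (`transpose_any_ndim` is a hypothetical name; the real implementation would live in C++ under src/operator):

```python
import numpy as np

def transpose_any_ndim(a, axes):
    """Transpose for any number of dimensions via runtime stride arithmetic."""
    in_shape = a.shape
    out_shape = tuple(in_shape[ax] for ax in axes)

    def c_strides(shape):
        # row-major strides, measured in elements
        s = [1] * len(shape)
        for i in range(len(shape) - 2, -1, -1):
            s[i] = s[i + 1] * shape[i + 1]
        return s

    in_strides = c_strides(in_shape)
    out_strides = c_strides(out_shape)
    src = np.ascontiguousarray(a).ravel()
    dst = np.empty(src.size, dtype=a.dtype)
    for flat in range(src.size):          # one "kernel thread" per output element
        rem, src_off = flat, 0
        for d in range(len(axes)):
            # output axis d corresponds to input axis axes[d]
            coord, rem = divmod(rem, out_strides[d])
            src_off += coord * in_strides[axes[d]]
        dst[flat] = src[src_off]
    return dst.reshape(out_shape)
```

Because shapes and strides are plain arrays here, nothing in this scheme caps the dimensionality at 6.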
Hi @szha , I am trying to fix it. My approach is to collapse consecutive axes of the input array into a single axis here when axes.ndim() > 2.
https://github.com/wkcn/incubator-mxnet/tree/support_transpose_dim_gt_6
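The axis-collapsing idea can be sketched in pure Python (a hedged model of the approach; `collapse_axes` is a hypothetical helper name, and the branch implements this in C++): runs of input axes that stay adjacent and in order under the permutation are merged into one axis, which often brings the effective ndim back within mshadow's 6-D cap.

```python
from math import prod

def collapse_axes(shape, axes):
    """Merge input axes that remain consecutive under the permutation."""
    groups = []                       # each group is a run of consecutive input axes
    for ax in axes:
        if groups and ax == groups[-1][-1] + 1:
            groups[-1].append(ax)
        else:
            groups.append([ax])
    # the merged input shape lists the groups in input order
    order = sorted(range(len(groups)), key=lambda g: groups[g][0])
    new_shape = tuple(prod(shape[a] for a in groups[g]) for g in order)
    # the merged permutation maps each output position to a merged input axis
    pos = {g: i for i, g in enumerate(order)}
    new_axes = tuple(pos[g] for g in range(len(groups)))
    return new_shape, new_axes
```

For example, transposing a (2, 3, 4, 5) array with axes (2, 3, 0, 1) reduces to transposing a (6, 20) array with axes (1, 0), and many 7-D permutations collapse to 6 or fewer effective axes.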
Hi @szha , I created a TransposeKernel and need to copy the argument (namely axes) into the target context. Is there a good way to copy it?
I tried calling get_space_typed to allocate a workspace, but that requires declaring a temporary space in every operator that calls TransposeImpl.
I will create a new function TransposeExImpl to support more than 6 dimensions, which needs to allocate extra workspace. The original TransposeImpl will not be modified.