Yolov5: CoreML convert error

Created on 7 Jul 2020  ·  16 Comments  ·  Source: ultralytics/yolov5

ONNX export success, saved as ./yolov5s.onnx

Starting CoreML export with coremltools 4.0b1...
WARNING:root:Tuple detected at graph output. This will be flattened in the converted model.
Converting Frontend ==> MIL Ops: 4%|▍ | 60/1415 [00:00<00:04, 303.59 ops/s]
CoreML export failure: PyTorch convert function for op leaky_relu_ not implemented
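The failing op is the in-place variant of LeakyReLU (the trailing underscore in `leaky_relu_`). Besides rebuilding coremltools, a common workaround is to switch the activations to their out-of-place form before tracing, e.g. `for m in model.modules(): if isinstance(m, torch.nn.LeakyReLU): m.inplace = False`. Below is a dependency-free sketch of that traversal; the `Module`, `LeakyReLU`, and `Sequential` classes are hypothetical stand-ins for the torch equivalents, not the real API.

```python
class Module:
    """Minimal stand-in for torch.nn.Module: just enough to walk children."""
    def modules(self):
        yield self
        for child in getattr(self, "children", []):
            yield from child.modules()

class LeakyReLU(Module):
    def __init__(self, inplace=True):
        self.inplace = inplace  # inplace=True traces as the unsupported leaky_relu_

class Sequential(Module):
    def __init__(self, *children):
        self.children = list(children)

# Build a tiny nested "model" and flip every LeakyReLU to out-of-place,
# so the tracer would emit leaky_relu instead of leaky_relu_.
model = Sequential(LeakyReLU(), Sequential(LeakyReLU(), LeakyReLU()))
for m in model.modules():
    if isinstance(m, LeakyReLU):
        m.inplace = False

print(all(not m.inplace for m in model.modules() if isinstance(m, LeakyReLU)))  # → True
```

With a real YOLOv5 checkpoint the same two-line loop over `model.modules()` is all that changes; the traversal logic is identical.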

Labels: Stale, bug


Hello @mathpopo, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook Open In Colab, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@mathpopo we have updated export.py a bit to better support the 3 export channels (ONNX, TorchScript, and CoreML). See #251 for a tutorial on how to use export.py. Note that export.py is mainly a guide that provides simple first steps for users like yourself to begin creating your own export pipelines; it does not provide end-to-end export functionality.

We do, however, offer paid end-to-end export services, as well as reference apps for iOS and Android that use your exported models. If you have a business idea for YOLOv5 at the edge, we'd be happy to help you get started! You can email glenn.[email protected] for details if interested.

@mathpopo

Building the latest coremltools from source (https://github.com/apple/coremltools) will fix this issue.

It was fixed in https://github.com/apple/coremltools/commit/02ddf841c9b68b24918c88bd30eb7d22dbde7f34

@dlawrences I can't find documentation on building coremltools from source. Can you give me the link? Otherwise, I need to wait for the new coremltools release.

Hi @imyoungyang

There are some details here...

You should do:

```shell
conda activate <your-python-environment>
cd <coremltools-root>        # root of the coremltools checkout
mkdir build && cd build
cmake ../
make install
cp ../setup.py ./
python setup.py install
```

Then copy `libcoremlpython.so` from the build output (in my case, `/Users/laurentiudiaconu/Downloads/coremltools-source/coremltools/`) into your environment's coremltools installation (in my case, `/Users/laurentiudiaconu/opt/miniconda3/envs/pytorch_15/lib/python3.7/site-packages/coremltools-4.0b1-py3.7.egg/coremltools/`).

Hi

Could you please try installing the package from the build directory using `pip install .`?

Make sure to uninstall it first if you have already installed it, or use `pip install --upgrade .`

Thanks

On Tue, 21 Jul 2020 at 13:14, joshgreifer notifications@github.com wrote:

Hi @dlawrences https://github.com/dlawrences, following your instructions above does not build libcoremlpython.so (under Ubuntu 18.04); it only builds libcaffeconverter.so, and does not resolve the issue for me.



Thanks, I deleted my comment, but not before you replied. I copied the Python code from the coremltools directory to my conda env, and now all is fine; I've successfully exported my torch model to CoreML.

Great @joshgreifer

Hi Everyone,
I'm trying to convert the yolov5s.pt model to a CoreML model. The problem is that I need the Detect layer enabled, so I commented out this line: https://github.com/ultralytics/yolov5/blob/7eaf225d558c6495190e0c79a56553633a065c49/models/export.py#L29
But then coremltools throws an exception:
CoreML export failure: node 2321 (expand) got 3 input(s), expected 2
Without commenting out that line it exports successfully, but returns a strange output, and it is unclear what to do with it.
Is there any way to keep the Detect layer and convert it to CoreML?

Thanks.

I've used some suggestions from here #343 and implemented the Detect layer's inference function as post-processing.


def detect_layer_inf(self, x):
    z = []
    for i in range(self.nl):
        bs, _, ny, nx, _ = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)

        # Rebuild the grid if the feature-map size changed
        if self.grid[i].shape[2:4] != x[i].shape[2:4]:
            self.grid[i] = self._make_grid(nx, ny).to(x[i].device)

        y = x[i].sigmoid()
        y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i]  # xy
        y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
        z.append(y.view(bs, -1, self.no))

    return torch.cat(z, 1)  # outside the loop, so all detection scales are concatenated

pred = model(img, augment=opt.augment)
detect_layer = model.model[-1]
pred = detect_layer_inf(detect_layer, pred)

Hope this will help.
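The per-cell decode in the snippet above can be sanity-checked by hand. A minimal pure-Python sketch of the same math, with no torch dependency (the grid offset, stride, and anchor values below are illustrative numbers, not taken from a real checkpoint):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def decode_xywh(raw, grid_xy, stride, anchor_wh):
    """Decode one raw prediction (tx, ty, tw, th) into pixel-space xywh,
    mirroring the Detect-layer math above:
    xy = (sig * 2 - 0.5 + grid) * stride, wh = (sig * 2) ** 2 * anchor."""
    tx, ty, tw, th = raw
    gx, gy = grid_xy
    aw, ah = anchor_wh
    x = (sigmoid(tx) * 2.0 - 0.5 + gx) * stride
    y = (sigmoid(ty) * 2.0 - 0.5 + gy) * stride
    w = (sigmoid(tw) * 2.0) ** 2 * aw
    h = (sigmoid(th) * 2.0) ** 2 * ah
    return x, y, w, h

# Raw logits of 0.0 (sigmoid = 0.5) land at the cell-centre offset,
# and wh collapses to exactly the anchor size.
print(decode_xywh((0.0, 0.0, 0.0, 0.0), (5, 7), 8, (10, 13)))  # → (44.0, 60.0, 10.0, 13.0)
```

This is only a readability aid for the vectorized tensor version; the exported model's post-processing should use the tensor code above.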

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.


I have the same problem when exporting to mlmodel, but could you tell me where to add this code?

@hovhanns can you explain your solution a little more?

@AlvinZheng @waheed0332

# This line gets the output from the exported model.
pred = model(img, augment=opt.augment)
# This takes the last layer (Detect layer)
detect_layer = model.model[-1]
# This one calls the function that I wrote above and does the final inference.
pred = detect_layer_inf(detect_layer, pred)

All of this is needed because of this line: https://github.com/ultralytics/yolov5/blob/0ada058f6359b9c76569e9b501a5da7da0a11d74/models/export.py#L50
The model's output changes after export, and the above post-processing is necessary to recover the correct output.

Hi @hovhanns, what was your Vision request output after these changes, was it [VNCoreMLFeatureValueObservation] or [VNRecognizedObjectObservation]?

Hi @maidmehic, the output was bounding boxes and classes only (an Objective-C based array); we then added some logic to draw the boxes on the captured picture.
