YOLOv5: Training stops when load_image throws an error

Created on 22 Jun 2020 · 5 comments · Source: ultralytics/yolov5

Steps to reproduce

1. Start training.
2. Have a zero-size image in the dataset.

or

1. Start training.
2. Have images with missing label files.

In either case, training stops with this error:

      data = [self.dataset[idx] for idx in possibly_batched_index]
    File "/app/workspace/utils/datasets.py", line 451, in __getitem__
      img, labels = load_mosaic(self, index)
    File "/app/workspace/utils/datasets.py", line 574, in load_mosaic
      img, _, (h, w) = load_image(self, index)
    File "/app/workspace/utils/datasets.py", line 536, in load_image
      assert img is not None, 'Image Not Found ' + path
    AssertionError: Image Not Found /app/workspace/data/BBFashion/images/5e4118b37ddc9d75987fa7f3.jpg

Why is the image not just skipped?
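One way to avoid this crash entirely is to scan the dataset before training and report every sample that would fail to load. The sketch below is a hypothetical standalone helper, not part of the YOLOv5 repo; the directory layout (sibling `images/` and `labels/` folders with matching stems) is an assumption:

```python
from pathlib import Path

def find_bad_samples(image_dir, label_dir):
    """Report images that would crash training: zero-byte files or
    images without a matching label file.
    (Hypothetical helper; the images/labels layout is an assumption.)"""
    bad = []
    for img in sorted(Path(image_dir).glob("*.jpg")):
        label = Path(label_dir) / (img.stem + ".txt")
        if img.stat().st_size == 0:
            bad.append((str(img), "zero-byte image"))
        elif not label.exists():
            bad.append((str(img), "missing label"))
    return bad
```

Running this once before a long training job surfaces bad files up front instead of hours in.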
bug

All 5 comments

Hello @Matanelc, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook (Open in Colab), Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

I also have the same error, even though none of my images are NoneType and none of my labels are empty.

@Matanelc I'm not sure why I have to explain this to you, but 0 size images will obviously not work correctly. There is no bug.

I know that in Darknet the training script simply ignores bad annotations and bad images.
The bottom line is that when dealing with a large dataset, there is always a chance of a few images being corrupt.

Thus a training script should be robust and handle these errors instead of making the training fail after running for a long time.

It was also a problem here: #195

And about the response above:
This is open source, people will open bugs because this is how a community works.
You should either grow up and embrace it or take the repo down.

@doronAtuar facilitating silent errors is never a best practice.

https://zen-of-python.info/errors-should-never-pass-silently.html
Errors should never pass silently. Unless explicitly silenced.
Just because programmers often ignore error messages doesn't mean the program should stop emitting them. Silent errors can happen when functions return error codes or None instead of raising exceptions. These two aphorisms tell us that it's better for a program to fail fast and crash than to silence the error and continue running the program. The bugs that inevitably happen later on will be harder to debug since they are far removed from the original cause. Though you can always choose to explicitly ignore the errors your programs cause, just be sure you are making the conscious choice to do so.
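The "unless explicitly silenced" clause is the middle ground between the two positions in this thread: catch the specific error, log it, and move on, so nothing passes silently. These two loader functions are purely illustrative, not YOLOv5's actual code:

```python
import logging

logging.basicConfig(level=logging.WARNING)

def load_image_strict(path):
    """Fail fast: raise immediately so the bad path is reported at its source."""
    with open(path, "rb") as f:
        data = f.read()
    if not data:
        raise ValueError(f"zero-byte image: {path}")
    return data

def load_image_lenient(path):
    """Explicitly silenced: skip the sample, but log a warning so the
    error never passes *silently*."""
    try:
        return load_image_strict(path)
    except (OSError, ValueError) as exc:
        logging.warning("skipping bad sample %s: %s", path, exc)
        return None
```

The caller of the lenient variant must still handle the `None` return (e.g. by drawing another sample), which is exactly the conscious choice the aphorism asks for.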
