Yolov5: Small object detection and image sizes

Created on 12 Jun 2020 · 18 comments · Source: ultralytics/yolov5

Hello! Thank you for such a great implementation. Amazing inference performance!

I have a few questions that I would like some quick clarification on:
Imagine I have a database of images of size 1980x1080

  1. When using train.py, what does --img really do? Does it scale images (keeping aspect ratio) to feed into the network at that given size, and then calculate the number of tiles based on stride and dimensions?

  2. Does --img take parameters [width, height] or [height, width]?

  3. If I trained a network using --img 1280, what should I set my --img-size to when using detect.py? 1280 as well?

  4. My assumption is that if I have images of 1980x1080 and I want to find small objects in each, I should then train my network with image size 1980 to retain image information, correct?

  5. What do you recommend for the anchors in the .yaml when detecting smaller objects? The model is already fantastic at finding small objects, but I am curious if there are any other tips you have on tweaking training parameters to find small objects reliably in images.

  6. Trying to use the --evolve arg ends up with an error:

Traceback (most recent call last):
File "train.py", line 440, in <module>
results = train(hyp.copy())
File "train.py", line 201, in train
tb_writer.add_histogram('classes', c, 0)
AttributeError: 'NoneType' object has no attribute 'add_histogram'

Thank you in advance!

Labels: Stale, enhancement


All 18 comments

Hello @mbufi, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook Open In Colab, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@mbufi --img (which is short for --img-size) accepts two values, which are the train and test sizes. If you supply one size it is used for both, so for example:

python train.py --img 640 means that the mosaic dataloader pulls up the selected image along with 3 random images, resizes them all to 640, joins all 4 at the seams into a 1280x1280 mosaic, augments them, and then crops a center 640x640 area for placement as 1 image into the batch.
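To make the mosaic step above concrete, here is a minimal sketch of the idea in NumPy: resize 4 images to the training size, tile them into a double-size canvas, and crop a centered window back to the training size. This is a hypothetical illustration only; the actual YOLOv5 dataloader also jitters the mosaic center, adjusts the labels, and applies augmentation.

```python
import numpy as np

def mosaic_sketch(images, s=640):
    """Illustrative 2x2 mosaic: resize 4 images to s x s, tile them into a
    2s x 2s canvas, then crop a centered s x s window.
    (Hypothetical helper, not the real YOLOv5 mosaic dataloader.)"""
    canvas = np.full((2 * s, 2 * s, 3), 114, dtype=np.uint8)  # grey fill
    for i, img in enumerate(images[:4]):
        # naive nearest-neighbor resize to s x s, enough for the sketch
        h, w = img.shape[:2]
        ys = np.arange(s) * h // s
        xs = np.arange(s) * w // s
        resized = img[ys][:, xs]
        r, c = divmod(i, 2)  # position in the 2x2 grid
        canvas[r * s:(r + 1) * s, c * s:(c + 1) * s] = resized
    # crop the center back down to the training size
    off = s // 2
    return canvas[off:off + s, off:off + s]
```

The center crop means each training sample can contain pieces of all four source images at once, which is part of why mosaic augmentation helps with small objects.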

Training at native resolution will always produce the best results if your hardware/budget allows for it. If your object sizes differ significantly from the default anchors (as measured in pixels at your training --img), though, you would need to modify the anchors as well for best results.

Training and inference should be paired at the same resolution for best results. If you plan on inference at 1980, train at 1980. If you plan on inference at 1024, train at that size. Just remember the anchors do not change size, they are fixed in pixel-space, so modify as appropriate if necessary.
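Since anchors are fixed in pixel-space, one simple consequence of the advice above is that if you change training resolution, the anchors in the model .yaml should scale proportionally. A minimal sketch (an illustrative helper, not part of the repo):

```python
def scale_anchors(anchors, old_img_size, new_img_size):
    """Scale pixel-space (w, h) anchor pairs, as listed in the model
    .yaml, proportionally to a new training resolution.
    (Hypothetical helper for illustration.)"""
    r = new_img_size / old_img_size
    return [(round(w * r), round(h * r)) for w, h in anchors]

# e.g. the yolov5s P3/8 anchors, defined at --img 640, rescaled for 1280:
p3 = [(10, 13), (16, 30), (33, 23)]
print(scale_anchors(p3, 640, 1280))  # [(20, 26), (32, 60), (66, 46)]
```

This keeps the anchors' size relative to the objects unchanged; it is not a substitute for re-running kmeans when the object-size distribution itself is unusual.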

We offer a hybrid kmeans-genetic evolution algorithm for anchor computation:
https://github.com/ultralytics/yolov5/blob/ad71d2d513dd1cce71b87b72af6c2685709549ad/utils/utils.py#L657-L662

@glenn-jocher Great! this all makes sense now:) Thank you so much for that great description.

With that said:

  1. That is super interesting. To make sure I understand - I have to use the kmean_anchors() separately before training to add to my .yaml correct? How does one acquire the coco128.txt?
  2. I am not 100% finished in reading through --evolve .. but I fail to run it. Is this a bug? Or am not using it correctly? I get the following error when using it:

Traceback (most recent call last):
File "train.py", line 440, in <module>
results = train(hyp.copy())
File "train.py", line 201, in train
tb_writer.add_histogram('classes', c, 0)
AttributeError: 'NoneType' object has no attribute 'add_histogram'

Thank you again for your dedication!

@mbufi you can optionally run kmean_anchors() if you feel your objects are not similar in size to the default anchors. You would do this before training, and then manually place the final generation of evolved anchors into your model.yaml file here:

https://github.com/ultralytics/yolov5/blob/ad71d2d513dd1cce71b87b72af6c2685709549ad/models/yolov5s.yaml#L6-L11

We have not tried to use --evolve in this repo yet, so I can't speak for its status. In any case, this is a much more advanced offline feature (it is not part of training) which you would only try to run if default training is not producing results that are acceptable to you. It requires significant time and resources to produce results.

@glenn-jocher Awesome. That's what I figured... In the example in the code, where did you get the coco128.txt? What does that text file represent? Can I use the .yaml for this instead?

@mbufi there is no text file like this. You can create a custom dataset using coco128.yaml as a template:
https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data#1-create-datasetyaml
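Following that template, a custom dataset .yaml would look roughly like this (the paths and class names below are placeholders, not from the original thread):

```yaml
# hypothetical customdata.yaml, patterned after coco128.yaml
train: ../custom/images/train/   # directory of training images
val: ../custom/images/val/       # directory of validation images
nc: 2                            # number of classes
names: ['widget', 'gadget']      # class names, in label-index order
```

Labels are read from a parallel labels/ directory, so only the image directories need to be listed here.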

@glenn-jocher Yes, correct. I have my own customdata.yaml

The problem I am getting is using the kmean_anchors() function with my yaml. I know the yaml works because I have trained my own custom model already.

I am in the process of generating new anchors:

Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

from utils.utils import *
_ = kmean_anchors(path='/home/ai/yolov5/data/custom_data.yaml', img_size=(1280,960))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ai/yolov5/utils/utils.py", line 698, in kmean_anchors
dataset = LoadImagesAndLabels(path, augment=True, rect=True)
File "/home/ai/yolov5/utils/datasets.py", line 277, in __init__
assert n > 0, 'No images found in %s. See %s' % (path, help_url)
AssertionError: No images found in /home/ai/yolov5/data/camshaft.yaml. See https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data

I even try to run it with the coco128.yaml and the images and it still gives me the same error.

For reference from utils.py :
def kmean_anchors(path='./data/coco128.txt', n=9, img_size=(640, 640), thr=0.20, gen=1000):

@mbufi yes, this is possible since we have not actually updated this function for yolov5 yet. We will try to update it next week. In the meantime you may simply try to pass the directory of your training images as shown in the yaml:
https://github.com/ultralytics/yolov5/blob/ad71d2d513dd1cce71b87b72af6c2685709549ad/data/coco128.yaml#L11

TODO: Update kmeans_anchors() for v5

@glenn-jocher Okay. Great. Thanks for all your help!

Passing the directory directly worked for me:

kmean_anchors('./train/images', n=9, img_size=[416, 416], thr=4.0, gen=1000)

@Jacobsolawetz yes it works. I believe the latest commit allows you to pass the .yaml

Do you have a good understanding of the threshold with regard to small objects? I see you are using 4.0. Why is that?

All,

Kmeans has been updated, and AutoAnchor is now implemented. This means anchors are analyzed automatically and updated as necessary. No action is required on the part of the user; this is the new default behavior. You simply train normally as before to get this.
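The core of an AutoAnchor-style check is easy to sketch: for each labeled box, measure the worst width/height ratio against each anchor, and count a box as matched if some anchor fits within the ratio threshold (the thr=4.0 seen earlier in the thread is this kind of max-ratio bound). The sketch below is an assumption about the shape of the computation, not the repo's actual implementation, which differs in detail and also re-evolves anchors when the fit is poor.

```python
import numpy as np

def anchor_fit(wh, anchors, thr=4.0):
    """Rough AutoAnchor-style fit check (illustrative sketch).
    wh:      (N, 2) label sizes in pixels at the training --img.
    anchors: (M, 2) anchor sizes in pixels.
    Returns the fraction of labels with at least one anchor whose
    worst w/h ratio is under thr ('best possible recall' style metric)."""
    wh = np.asarray(wh, dtype=float)
    anchors = np.asarray(anchors, dtype=float)
    r = wh[:, None, :] / anchors[None, :, :]   # (N, M, 2) w and h ratios
    worst = np.maximum(r, 1 / r).max(axis=2)   # worst of the two per pair
    best = worst.min(axis=1)                   # best-fitting anchor per label
    return (best < thr).mean()

labels = [(12, 15), (30, 28), (200, 180)]
anchors = [(10, 13), (33, 23), (116, 90)]
print(anchor_fit(labels, anchors))  # fraction of labels a current anchor fits
```

A small thr demands anchors very close to the label sizes (more reason to re-evolve anchors for unusual datasets, e.g. all-tiny objects); a large thr tolerates looser fits.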

git pull or clone again to receive this update.

@glenn-jocher very nice

@Jacobsolawetz thanks! I've been meaning to get this done for a while now. To be honest, manually crunching anchors and then slotting them back into a model file is a pretty complicated task that can go wrong in a lot of places, so automating the process should remove those failure points.

And of course, I have a feeling poor anchor-data fits may be one of the primary reasons for people seeing x results in a paper, but then turning around to find y results on their custom dataset (where y << x).

Hopefully this will help bridge that gap in a painless way.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@glenn-jocher May I know if the anchor data gets saved into the yaml file after AutoAnchor is run? I need to know the anchors being used for further output processing.

@foochuanyue You may want to read the autoanchor output, which answers your question.

(screenshot of the AutoAnchor console output, 2020-10-16)

@glenn-jocher ok! thanks!

