Hi! I have a question about your dataloader.
Your YOLOv5 is quite difficult for me to analyze.
How can I visualize the dataloader's input after augmentation?
Before training, I'd like to see how the images come out of the dataloader.
Thanks!
Trust me, Glenn's code is really easy to read and understand compared to many ML repos.
You can check the images created in the folder 'runs/exp0/'; they show the first 3 batches when a training run starts. Otherwise, you could simply create a dataloader object in a notebook using the repo's code, load images one after another, and view them however you want, either by writing them to disk or showing them with pyplot.
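If you go the do-it-yourself route, the main chore is converting each batch tensor back into something an image viewer understands. Here is a minimal sketch, assuming batches come out in the usual [batch, channels, height, width] layout with values either in 0-1 or 0-255 (the helper name `batch_to_images` is mine, not from the repo):

```python
import numpy as np

def batch_to_images(batch):
    """Convert a [B, C, H, W] float batch to a list of HWC uint8 arrays
    ready for cv2.imwrite or pyplot.imshow."""
    imgs = []
    for img in batch:
        img = np.transpose(img, (1, 2, 0))  # CHW -> HWC
        if img.max() <= 1.0:                # undo 0-1 normalization if present
            img = img * 255.0
        imgs.append(img.astype(np.uint8))
    return imgs

# A fake batch standing in for one yielded by the dataloader
batch = np.random.rand(4, 3, 640, 640).astype(np.float32)
for i, img in enumerate(batch_to_images(batch)):
    print(i, img.shape, img.dtype)
```

From there, `plt.imshow(img)` or `cv2.imwrite(f'batch_{i}.jpg', img)` does the rest (remember YOLOv5 keeps images in BGR order when loaded via OpenCV).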
You have to look at the LoadImagesAndLabels(Dataset) class. This class returns a tuple (torch.from_numpy(img), labels_out, self.img_files[index], shapes).
Here torch.from_numpy(img) is a tensor with shape [channels, h, w] (the dataloader's collate function stacks these into [batch_size, channels, h, w]), and labels_out holds the bounding-box annotations for the image: each row is one object's box as (x_center, y_center, w, h). Because the bounding-box coordinates are normalized, you have to multiply them by the image width and height to visualize them: xc, yc, w, h = xc*w_img, yc*h_img, w*w_img, h*h_img
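Since drawing functions like cv2.rectangle want pixel corner coordinates rather than normalized center/size values, the denormalization above is usually combined with a center-to-corner conversion. A small sketch (the helper name `yolo_to_xyxy` is mine):

```python
def yolo_to_xyxy(box, img_w, img_h):
    """Convert one normalized (x_center, y_center, w, h) box to pixel
    (x1, y1, x2, y2) corners for drawing with e.g. cv2.rectangle."""
    xc, yc, bw, bh = box
    # scale normalized values up to pixels
    xc, yc, bw, bh = xc * img_w, yc * img_h, bw * img_w, bh * img_h
    # move from center/size to top-left and bottom-right corners
    return (int(xc - bw / 2), int(yc - bh / 2),
            int(xc + bw / 2), int(yc + bh / 2))

# A box centered in a 640x480 image, covering half of each dimension
print(yolo_to_xyxy((0.5, 0.5, 0.5, 0.5), 640, 480))  # (160, 120, 480, 360)
```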
Visualization with batch size=6
We display this clearly in the notebook; I'd suggest you start from there:
https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb#scrollTo=DLI1JmHU7B0l

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi @glenn-jocher, how can I add albumentations augmentations to the DataLoader?
@buimanhlinh96 you can customize the trainloader here:
https://github.com/ultralytics/yolov5/blob/7220cee1d1dc1f14003dbf8d633bbb76c547000c/utils/datasets.py#L328
@glenn-jocher, could you please provide more details on how to use albumentations in the DataLoader?
@Auth0rM0rgan we don't have an integration with albumentations. YOLOv5 augmentation hyperparameters are set here:
https://github.com/ultralytics/yolov5/blob/f6b3c96966d963a4c69736c8691f068e5b554247/data/hyp.scratch.yaml#L22-L33
You are free to modify the dataloader also as you see fit here:
https://github.com/ultralytics/yolov5/blob/7220cee1d1dc1f14003dbf8d633bbb76c547000c/utils/datasets.py#L328
Hey @glenn-jocher, Thanks for the quick reply!
I know about the YOLOv5 augmentations, but I'm trying to add some blur augmentations that aren't included, such as MotionBlur and GaussianBlur, and I'm having difficulty adding them inside YOLOv5. I would appreciate any help you can give.
Thanks!
@Auth0rM0rgan ah, I see. Well, you can access the raw image as a numpy array img here, after it has already passed through the YOLOv5 augmentations, so I would just insert any extra augmentations at this point. Be careful to modify the labels correspondingly if your augmentations require it, though.
https://github.com/ultralytics/yolov5/blob/f6b3c96966d963a4c69736c8691f068e5b554247/utils/datasets.py#L526-L528
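For blur-type transforms specifically, no label adjustment is needed, since they change pixel values but not box geometry. As a rough sketch of what an inserted augmentation could look like, here is a plain-numpy horizontal motion blur standing in for albumentations' MotionBlur (the function name and kernel choice are mine, for illustration only):

```python
import numpy as np

def motion_blur(img, k=9):
    """Apply a horizontal motion blur to an HWC uint8 image.
    Pixel-level only: box labels stay valid without modification."""
    kernel = np.zeros((k, k), dtype=np.float32)
    kernel[k // 2, :] = 1.0 / k  # average along one horizontal line
    h, w, c = img.shape
    pad = k // 2
    padded = np.pad(img.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.zeros_like(img, dtype=np.float32)
    # direct 2D convolution with the motion-blur kernel
    for dy in range(k):
        for dx in range(k):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w, :]
    return out.clip(0, 255).astype(np.uint8)

img = np.full((32, 32, 3), 128, dtype=np.uint8)
blurred = motion_blur(img)
print(blurred.shape, blurred.dtype)
```

In practice you would apply something like this (or an albumentations pipeline) to `img` right at the linked spot, before the array is converted to a tensor.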
@glenn-jocher Thanks, I made it work! Now I can apply different kinds of augmentations at both the pixel level and the spatial level!
@Auth0rM0rgan hey, cool! Maybe you could consider submitting a PR for a general Albumentations integration? I know it's a popular tool, so others might find that useful.
@glenn-jocher, Sure! will do it during the weekend!