I now have two types of data: benign thyroid data and malignant thyroid data. They are labeled with mask maps, and each map represents only one lesion. Do I need to separate my data into benign and malignant classes before training with Mask R-CNN?
If not, how does the model know which of my data is benign and which is malignant?
@disign-ming
If I have two types of data, one benign and one malignant, and each image contains only a benign or a malignant lesion, how should I organize them for Mask R-CNN training? Do I need to store the benign data in one folder and the malignant data in another, for example a benign folder and a malignant folder, and then register the categories with:
self.add_class("benign", 1, "benign")
self.add_class("malignant", 2, "malignant")
Is this the case?
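One detail worth noting: in matterport's Mask R-CNN, the first argument to `add_class` is a *source* tag for the whole dataset, so it is usually the same string for every class, and the class name goes last. Here is a minimal sketch; the `ThyroidDataset` stand-in and the `"thyroid"` source name are illustrative, not the real library class:

```python
# Minimal stand-in for mrcnn.utils.Dataset, just to illustrate class
# registration -- not the real matterport class.
class ThyroidDataset:
    def __init__(self):
        # matterport's Dataset pre-registers the background as class 0
        self.class_info = [{"source": "", "id": 0, "name": "BG"}]

    def add_class(self, source, class_id, class_name):
        self.class_info.append(
            {"source": source, "id": class_id, "name": class_name})

ds = ThyroidDataset()
# Use ONE source name for both classes: the first argument tags the
# dataset, not the class, so it should be the same string each time.
ds.add_class("thyroid", 1, "benign")
ds.add_class("thyroid", 2, "malignant")
print([c["name"] for c in ds.class_info])  # ['BG', 'benign', 'malignant']
```

With this setup the two folders are only a storage convenience; what actually tells the model "benign vs. malignant" is the class ID you attach to each mask.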
If you want results that show whether a thyroid is benign or malignant, then yes you should classify them separately. If you were only looking for thyroids in general, they would be combined into the same class.
I have found the following resources helpful in building my own dataset for matterport's Mask RCNN:
@freezurbern
OK, I will look at it. What I don't really understand is how to label the data with masks, and how to do multi-class detection and segmentation with Mask R-CNN.
Masks label objects in the images that you want the model to find during inference. Masks are a way of annotating your training dataset to separate objects from their background. A mask is a binary representation of one object. You can think of the background and all other areas as False and the object you want to detect as True.
We use classes to group similar masks.
For example, if you were trying to detect cats and dogs, you would have the choice of using the label animals (includes both cats and dogs) or more specific labels like cat and dog. Labeling them specifically allows the model to tell you which is which.
As for creation of masks, you will want to use some of the links above.
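To make the binary-mask idea concrete, here is a tiny NumPy sketch of the shape of data that `load_mask()` typically returns in matterport's Mask R-CNN: a boolean H×W×N array with one channel per instance, plus a parallel array of class IDs. The sizes and IDs here are illustrative:

```python
import numpy as np

# A 6x6 image with one object occupying a 2x2 patch.
mask = np.zeros((6, 6), dtype=bool)   # False = background
mask[2:4, 2:4] = True                 # True  = object pixels

# load_mask() stacks one such mask per instance along the last
# axis and returns a parallel array of per-instance class IDs.
masks = mask[..., np.newaxis]              # shape (6, 6, 1)
class_ids = np.array([1], dtype=np.int32)  # e.g. 1 = "benign"
print(masks.shape, int(masks.sum()))       # (6, 6, 1) 4
```

The class ID array is how "grouping similar masks into classes" is expressed numerically: every instance channel gets one entry.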
@freezurbern
Thank you very much for your reply. The test.zip file I uploaded contains four images of benign and malignant lesion data. I want to classify the two lesion types and segment both benign and malignant lesions. Since Mask R-CNN first needs the data classified, how do I classify the two types of data?
I referred to the example https://github.com/matterport/Mask_RCNN/tree/master/samples/nucleus, but it only has one class, and I don't know how to handle multiple classes.
Sorry to trouble you again.
test.zip
Multi-class detection is more like the shapes example:
https://github.com/matterport/Mask_RCNN/blob/master/samples/shapes/train_shapes.ipynb
You need to convert your annotations to a COCO-like JSON file, which includes a place for a classification. See here for an example: https://github.com/waspinator/pycococreator/blob/8d30e1845c4f136b1830edac11413a234a137bef/examples/shapes/shapes_to_coco.py from the tutorial https://patrickwasp.com/create-your-own-coco-style-dataset/
the tutorial at http://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch is also relevant.
The key difference between your use case and the Mask R-CNN shapes example is that you have real images, while they generate theirs on the fly.
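In a COCO-style file, the class lives in each annotation's `category_id`, which references an entry in the top-level `categories` list. Here is a minimal hand-written sketch; the file name, image size, and polygon coordinates are made up for illustration:

```python
import json

# Hypothetical minimal COCO-style annotation file with two categories.
coco = {
    "categories": [
        {"id": 1, "name": "benign", "supercategory": "lesion"},
        {"id": 2, "name": "malignant", "supercategory": "lesion"},
    ],
    "images": [
        {"id": 1, "file_name": "case_001.png", "width": 256, "height": 256},
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 2,  # 2 = malignant
         "segmentation": [[10, 10, 40, 10, 40, 40, 10, 40]],
         "bbox": [10, 10, 30, 30], "area": 900, "iscrowd": 0},
    ],
}
with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
```

Tools like pycococreator generate this structure for you from mask images; the point is simply that each lesion annotation carries its own `category_id`.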
@freezurbern
OK, thank you for your guidance. I think I need to learn to understand what you said.
@zhouhao-learning Have you been able to solve this problem yet?
@waleedka @freezurbern
I have a similar problem and I can't get my head around it after several reads.
I have two classes now, 'good' and 'defective', like in @zhouhao-learning's example. This time, a single image can contain both classes: one instance is good and the other is defective, in the same image. This means an image can contain both classes, or just one. If I want to use Mask R-CNN for training, how should I set up the classes? I have a JSON file for my training set that stores the class and segmentation mask for each image, following the COCO format.
Previously, I successfully trained on the dataset, but all object instances/predictions showed up as one class, so I was unable to distinguish good instances from bad instances. Inference reported only an overall detection AP and segmentation AP. How can I get separate AP results for each class? Can someone please help?
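For images that contain instances of both classes, nothing special is needed at the folder level: `load_mask()` for a single image can return masks of mixed classes, because each instance channel has its own entry in the class-ID array. A sketch with fabricated 8×8 masks, where 1 = 'good' and 2 = 'defective' are illustrative IDs:

```python
import numpy as np

def load_mask_example():
    """Return (masks, class_ids) for one image holding a 'good'
    instance AND a 'defective' instance at the same time."""
    h, w = 8, 8
    good = np.zeros((h, w), dtype=bool)
    good[0:3, 0:3] = True            # first instance, class "good"
    defective = np.zeros((h, w), dtype=bool)
    defective[5:8, 5:8] = True       # second instance, class "defective"
    masks = np.stack([good, defective], axis=-1)  # shape (8, 8, 2)
    class_ids = np.array([1, 2], dtype=np.int32)  # one ID per instance
    return masks, class_ids

masks, class_ids = load_mask_example()
print(masks.shape, class_ids.tolist())  # (8, 8, 2) [1, 2]
```

For per-class AP, one option is to filter the ground-truth and predicted instances by class ID before calling `utils.compute_ap`, then report the result for each class separately.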
@Eldad27 In my experience, I had to split my images so that one image contained one class. I did this by "cutting" the classes out.
Here's an example:
I want to find red squares and blue circles. I have an image with two red squares and three blue circles. I annotate them with their respective classes. Next, I split them so my training dataset has two images with only red squares and three images with only blue circles.
@freezurbern Thanks a lot. I'll try working around that
@freezurbern By splitting, did you mean cutting the images? So if I have one 256x256 image, do I end up with two 256x128 images (assuming the cut is made in the middle)?
Another solution could be to keep the whole image but duplicate it, with each copy having its own annotation file for its category.
@jpainam
I'm not sure if my use was correct, but here is what I did:
Image: (attachment not shown)

Annotations: (attachments not shown)
The annotations and the source images are associated with each other and are the same dimensions.