Hi @rykov8, I'm running into an error.
When I try to train on my own image dataset (following the SSD_training notebook), I get the following error:
```
832/1264 [==================>...........] - ETA: 634s - loss: 2.3086Exception in thread Thread-12:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 404, in data_generator_task
    generator_output = next(generator)
  File "/home/optsai/文件/Object detection/test_training.py", line 197, in generate
    img = jitter(img)
  File "/home/optsai/文件/Object detection/test_training.py", line 102, in contrast
    gs = self.grayscale(rgb).mean() * np.ones_like(rgb)
  File "/home/optsai/文件/Object detection/test_training.py", line 86, in grayscale
    return rgb.dot([0.299, 0.587, 0.114])
ValueError: shapes (300,300) and (3,) not aligned: 300 (dim 1) != 3 (dim 0)
```
Can you tell me how to fix it? Here is my code:
train.txt
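For context, the shape mismatch in the last traceback line can be reproduced with a minimal NumPy sketch (shapes taken from the error message — `(300, 300)` means the image has no channel axis, i.e. it is grayscale):

```python
import numpy as np

# A colour image has shape (H, W, 3), so dotting with the 3 luma
# weights contracts the channel axis and works fine.
rgb = np.zeros((300, 300, 3))
print(rgb.dot([0.299, 0.587, 0.114]).shape)  # (300, 300)

# A grayscale image loaded without a channel axis has shape (H, W);
# its last axis (300) cannot be contracted with a length-3 vector.
gray = np.zeros((300, 300))
try:
    gray.dot([0.299, 0.587, 0.114])
except ValueError as e:
    print(e)  # the same "not aligned" error as in the traceback
```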
Hey, what was the solution to this error?
Is your data by any chance grayscale?
If so, you have to convert it to RGB (channels => 3)
if you want to use the pre-trained model as is.
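One way to do that conversion is a small helper that repeats the single channel three times — a minimal sketch using NumPy (the helper name `to_rgb` is my own, not from the repo):

```python
import numpy as np

def to_rgb(img):
    """Turn a (H, W) grayscale array into (H, W, 3) by repeating the channel."""
    if img.ndim == 2:
        img = np.stack([img] * 3, axis=-1)
    return img

gray = np.zeros((300, 300))
print(to_rgb(gray).shape)  # (300, 300, 3) -- grayscale.dot([...]) now works
```

Already-RGB images pass through unchanged, so it is safe to call on every image in the generator before the augmentation step.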
@MicBA Yes, I realized it last night, just as you said: there are some grayscale images and I have to convert them. Thanks a lot!
@MicBA Thank you! You saved me! :) By the way, how did you guess it was a grayscale problem?