What is the top-level directory of the model you are using:
TensorFlow Object Detection API
Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
No, but I adjusted the config file.
OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
Linux Ubuntu 16.04
TensorFlow installed from (source or binary):
Binary
TensorFlow version (use command below):
1.8.0
Bazel version (if compiling from source):
N/A
CUDA/cuDNN version:
CUDA 9.0
cuDNN 7.0
GPU model and memory:
GeForce GTX 1080, 8 GB memory
Exact command to reproduce:
python ../train.py \
    --logtostderr \
    --pipeline_config_path=../samples/configs/ssd_inception_v2_for_report.config \
    --train_dir=../results/ssd-incep2/ \
    --gpu 0
From this example, and from my experiment, can't we use adam_optimizer as defined by the TensorFlow function?
Thank you!
If you look at the tests and the implementation of the optimizer builder, the optimizer is created using tf.train.AdamOptimizer. The constant_learning_rate field in the proto specifies the learning rate passed into the learning_rate argument of tf.train.AdamOptimizer (i.e. the learning rate isn't actually constant).
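For reference, here is a minimal TF 1.x sketch (my simplified reading of the builder, not the exact Object Detection API source) of what that comment describes:

import tensorflow as tf

# Simplified sketch of what the builder does for
#   adam_optimizer { learning_rate { constant_learning_rate { learning_rate: 0.0002 } } }
# constant_learning_rate only fixes the value fed to the learning_rate
# argument; Adam's per-parameter moment estimates still adapt the
# effective step size during training.
learning_rate = tf.constant(0.0002, dtype=tf.float32)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)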
Hello @k-w-w,
Thank you for the quick response.
From this article and the paper, I thought the learning rate in Adam would decay. However, judging from AdamOptimizer, the learning rate itself won't decay. So is that why a constant learning rate is used here?
Thank you.
Oh I see, so the goal is to decay the learning rate itself. You can try passing exponential_decay_learning_rate (see definition here) instead of constant_learning_rate to the learning_rate field.
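As far as I can tell (this is a rough mapping, not the exact builder code), the exponential_decay_learning_rate proto fields line up with tf.train.exponential_decay in TF 1.x roughly like this:

import tensorflow as tf

# Hypothetical mapping from the proto fields
# (initial_learning_rate, decay_steps, decay_factor) to TF 1.x ops:
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
    learning_rate=0.002,    # initial_learning_rate
    global_step=global_step,
    decay_steps=4000000,    # decay_steps
    decay_rate=0.95,        # decay_factor
    staircase=True)         # I believe the proto defaults to staircase decay
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)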
In object_detection/protos, only 3 optimizers are supported (Adam, Momentum, RMS-Prop).
To my understanding, Adam is the most advanced. If that is true, why is it not used in any model in the model zoo? All Faster R-CNN and Mask R-CNN models use Momentum (which I thought was outdated), so why not use Adam?
Did anybody ever use Adam instead and check the results?
Hello @k-w-w,
Sorry for the late response.
I will try exponential_decay_learning_rate with Adam and let you know ASAP.
Closing as this has been answered and has been in "awaiting response" status for more than a week. Please add comments if any, and we will reopen it. Thanks!
Hello @Harshini-Gadige @k-w-w, I have tried to use rms_prop_optimizer with an exponential decay learning rate, but despite that, when I check the learning-rate graph on TensorBoard, I still see a constant line. This is what I used in the config file:
optimizer {
  rms_prop_optimizer {
    learning_rate {
      exponential_decay_learning_rate {
        initial_learning_rate: 0.00400000018999
        decay_steps: 800720
        decay_factor: 0.949999988079
      }
    }
    momentum_optimizer_value: 0.899999976158
    decay: 0.899999976158
    epsilon: 1.0
  }
}
I have also tried to use the Adam optimizer along with exponential decay, but I still got the same constant learning-rate graph on TensorBoard. Here is what I used with the Adam optimizer:
optimizer {
  adam_optimizer: {
    learning_rate: {
      exponential_decay_learning_rate {
        initial_learning_rate: 0.00400000018999
        decay_steps: 800720
        decay_factor: 0.949999988079
      }
    }
  }
}
Could you please tell me how I can achieve an exponentially decaying learning rate?
I used the following settings with the Adam optimizer and I also always get a constant learning rate during training.
Shouldn't the learning rate decrease over time?
optimizer {
  adam_optimizer {
    learning_rate {
      exponential_decay_learning_rate {
        initial_learning_rate: 0.0002
        decay_steps: 800720
        decay_factor: 0.97
      }
    }
  }
}
According to your config, the learning rate will only decrease every 800k+ steps, so you will not see any change before then. Try a lower value of decay_steps, such as 1000.
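For anyone wanting to sanity-check this, a small self-contained computation (assuming staircase decay, which I believe is the proto's default) shows why the curve looks flat:

import math

def staircase_decay(initial_lr, decay_factor, decay_steps, step):
    # lr = initial_lr * decay_factor ** floor(step / decay_steps)
    return initial_lr * decay_factor ** math.floor(step / decay_steps)

# With decay_steps=800720 the rate is unchanged for the first 800k steps:
for step in (0, 100000, 800719, 800720):
    print(step, staircase_decay(0.0002, 0.97, 800720, step))

# With decay_steps=1000 it visibly drops every 1000 steps:
for step in (0, 1000, 2000, 10000):
    print(step, staircase_decay(0.0002, 0.97, 1000, step))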