mmdetection: Learning rate scheduler

Created on 26 Jun 2019 · 1 comment · Source: open-mmlab/mmdetection

Hi, I noticed that with default configuration like:

optimizer = dict(type='SGD', lr=0.005, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=1.0 / 3,
    step=[16, 21])

the learning rate will decrease to 1/10 of its value when training reaches a plateau (after several epochs/iterations?).
Now I want to change those settings; in PyTorch there is 'patience' and the like. What is the API name for that? Is there any documentation I can refer to? Thanks!
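For reference, the PyTorch scheduler with a 'patience' option that the question alludes to is torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=...). The mmdetection 'step' policy does not use patience at all; it decays at fixed epochs. A plain-Python mimic of the patience logic, for comparison only (my own sketch, not PyTorch's actual code):

```python
def reduce_on_plateau(losses, base_lr, factor=0.1, patience=3, threshold=1e-4):
    # Mimic of the patience logic in PyTorch's ReduceLROnPlateau:
    # cut the LR by `factor` once the metric has failed to improve
    # for more than `patience` consecutive epochs.
    lr, best, bad_epochs = base_lr, float('inf'), 0
    for loss in losses:
        if loss < best * (1 - threshold):  # relative improvement
            best, bad_epochs = loss, 0
        else:
            bad_epochs += 1
        if bad_epochs > patience:
            lr *= factor
            bad_epochs = 0
    return lr

# A loss that plateaus at 1.0 triggers one reduction once patience runs out.
print(reduce_on_plateau([1.0] * 6, 0.005))  # -> 0.0005
```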

Most helpful comment

In mmcv/runner/runner.py I saw hook_name = lr_config['policy'].title() + 'LrUpdaterHook',
and in mmcv/mmcv/runner/hooks/lr_updater.py I saw classes such as class FixedLrUpdaterHook(LrUpdaterHook) and class StepLrUpdaterHook(LrUpdaterHook).
Thus, the learning-rate schedule in an mmdetection configuration file should look like:

lr_config = dict(
    policy='step',  # could also be 'fixed', 'exp', 'poly', 'inv', or 'cosine'
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=1.0 / 3,
    step=[16, 21])
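As a rough sketch of what these fields do (my own re-implementation for illustration, not mmcv's actual code): the policy string is turned into a hook class name, the step policy multiplies the LR by gamma (0.1 by default, an assumption here) at each milestone epoch, and linear warmup ramps the LR from warmup_ratio * lr up to lr over warmup_iters iterations:

```python
from bisect import bisect_right

def hook_name(policy):
    # mmcv derives the hook class name from the policy string,
    # e.g. 'step' -> 'StepLrUpdaterHook'.
    return policy.title() + 'LrUpdaterHook'

def step_lr(base_lr, epoch, milestones, gamma=0.1):
    # Multiply base_lr by gamma once for every milestone already passed.
    return base_lr * gamma ** bisect_right(milestones, epoch)

def linear_warmup_lr(base_lr, cur_iter, warmup_iters, warmup_ratio):
    # Interpolate from base_lr * warmup_ratio at iteration 0
    # up to base_lr at iteration == warmup_iters.
    k = (1 - cur_iter / warmup_iters) * (1 - warmup_ratio)
    return base_lr * (1 - k)

print(hook_name('step'))             # -> StepLrUpdaterHook
print(step_lr(0.005, 10, [16, 21]))  # -> 0.005 (before the first milestone)
print(step_lr(0.005, 16, [16, 21]))  # -> 0.0005 (after epoch 16)
print(linear_warmup_lr(0.005, 500, 500, 1.0 / 3))  # -> 0.005 (warmup finished)
```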

