Pytorch-lightning: switch from LBFGS to Adam optimizer during the training loop

Created on 26 Sep 2020 · 4 comments · Source: PyTorchLightning/pytorch-lightning

Is it possible to show how we should write the "configure_optimizers" and "training_step" functions for the following code?
The purpose of the code is to switch the optimizer from LBFGS to Adam once loss_SUM < 0.3.

optimizer = optim.LBFGS(model.parameters(), lr=0.003)
Use_Adam_optim_FirstTime=True
Use_LBFGS_optim=True

for epoch in range(30000):
    loss_SUM = 0
    for i, (x, t) in enumerate(GridLoader):
        x = x.to(device)
        t = t.to(device)
        if Use_LBFGS_optim:
          def closure():
            optimizer.zero_grad()
            lg, lb, li = problem_formulation(x, t, x_Array,t_Array,bndry,pi)
            loss_total=lg+ lb+ li
            loss_total.backward(retain_graph=True)
            return loss_total
          loss_out=optimizer.step(closure)
          loss_SUM+=loss_out.item()
        elif Use_Adam_optim_FirstTime:
          Use_Adam_optim_FirstTime=False
          optimizerAdam = optim.Adam(model.parameters(), lr=0.0003)
          model.load_state_dict(checkpoint['model'])
          optimizerAdam.zero_grad()
          lg, lb, li = problem_formulation(x, t, x_Array,t_Array,bndry,pi)
          lg.backward()
          lb.backward()
          li.backward()
          optimizerAdam.step()
          loss_SUM += lg.item()+lb.item()+li.item()
        else:
          optimizerAdam.zero_grad()
          lg, lb, li = problem_formulation(x, t, x_Array,t_Array,bndry,pi)
          lg.backward()
          lb.backward()
          li.backward()
          optimizerAdam.step()
          loss_SUM += lg.item()+lb.item()+li.item()  
    if loss_SUM < .3 and Use_LBFGS_optim:
      Use_LBFGS_optim=False
      checkpoint = {'model': model.state_dict(),
                    'optimizer': optimizer.state_dict()}
Label: question

All 4 comments

Hi! Thanks for your contribution, great first issue!

Hi,
In your LightningModule, you could do this:

def on_epoch_start(self):
    if self.loss_SUM < 0.3:
        self.trainer.optimizers[0] = Adam(...)

and start with LBFGS as the default, returned from configure_optimizers.
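
For reference, a minimal sketch of that suggestion (the class name GridModel, the Linear layer, and the MSE loss are hypothetical stand-ins for the question's model and problem_formulation terms; it assumes automatic optimization, where Lightning passes the training_step closure to LBFGS):

import torch
from torch import optim
import pytorch_lightning as pl

class GridModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = torch.nn.Linear(2, 1)   # placeholder for the real network
        self.loss_SUM = float("inf")         # loss accumulated over the previous epoch
        self.switched = False                # True once Adam has replaced LBFGS

    def training_step(self, batch, batch_idx):
        x, t = batch
        # In the question this would be lg + lb + li from problem_formulation(...)
        loss = torch.nn.functional.mse_loss(self.model(x).flatten(), t.flatten())
        self.loss_SUM += loss.item()
        return loss

    def on_epoch_start(self):
        # Check the previous epoch's accumulated loss, then reset the counter.
        if not self.switched and self.loss_SUM < 0.3:
            self.trainer.optimizers[0] = optim.Adam(self.parameters(), lr=0.0003)
            self.switched = True
        self.loss_SUM = 0.0

    def configure_optimizers(self):
        # Start with LBFGS, as suggested above.
        return optim.LBFGS(self.parameters(), lr=0.003)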

I think this logic is now better done in configure_optimizers itself, in case someone also has some elaborate schedulers or a scheduler dict, and then calling:

def on_epoch_start(self):
    if condition:
        self.trainer.accelerator_backend.setup_optimizers(self)

def configure_optimizers(self):
    if condition:
        return Adam(...)
    else:
        return LBFGS(...)
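
A sketch of this variant, under the same assumed names as the one above (the accelerator_backend.setup_optimizers call is taken from the comment and tied to the Lightning API of that era; switch_to_adam and the 0.3 threshold stand in for "condition"):

import torch
from torch import optim
import pytorch_lightning as pl

class GridModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = torch.nn.Linear(2, 1)   # placeholder for the real network
        self.loss_SUM = float("inf")         # loss accumulated over the previous epoch
        self.switch_to_adam = False          # the "condition" used below

    def training_step(self, batch, batch_idx):
        x, t = batch
        loss = torch.nn.functional.mse_loss(self.model(x).flatten(), t.flatten())
        self.loss_SUM += loss.item()
        return loss

    def on_epoch_start(self):
        if not self.switch_to_adam and self.loss_SUM < 0.3:
            self.switch_to_adam = True
            # Rebuilds the optimizers (and any schedulers) by running configure_optimizers again.
            self.trainer.accelerator_backend.setup_optimizers(self)
        self.loss_SUM = 0.0

    def configure_optimizers(self):
        if self.switch_to_adam:
            return optim.Adam(self.parameters(), lr=0.0003)
        return optim.LBFGS(self.parameters(), lr=0.003)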