Pytorch-lightning: auto_lr_find does not work

Created on 28 May 2020  ·  10 Comments  ·  Source: PyTorchLightning/pytorch-lightning

🐛 Bug

I am using the auto_lr_find feature as below.
trainer = pl.Trainer(fast_dev_run=False, gpus=1, auto_lr_find=True)
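
A minimal sketch of running the finder by hand, assuming the 0.7.x/0.8.x trainer.lr_find API and reusing the tweetModel/training_loader names from this report (it goes through the same hparams check as auto_lr_find=True):

lr_finder = trainer.lr_find(tweetModel, train_dataloader=training_loader)  # manual LR sweep
print(lr_finder.suggestion())  # learning rate suggested by the sweep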

My model has a self.learning_rate attribute, as shown below (part of the model).

import torch
import torch.nn as nn
import pytorch_lightning as pl
from transformers import BertModel

class TweetSegment(pl.LightningModule):
    def __init__(self, config, lr=3e-5):
        super(TweetSegment, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased', config=config)
        self.drop_out = nn.Dropout(0.1)
        self.fullyConnected = nn.Sequential(nn.Linear(2 * 768, 2), nn.ReLU())
        self.learning_rate = lr
        self._init_initial()  # custom weight initialisation, defined elsewhere in the model

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.learning_rate)

When I call 'fit' using the line below
trainer.fit(tweetModel, train_dataloader=training_loader, val_dataloaders=valid_loader)
I still get the error:
MisconfigurationException: When auto_lr_find is set to True, expects that hparams either has field `lr` or `learning_rate` that can be overridden
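
For what it's worth, the message suggests the 0.7.6 finder only looks inside self.hparams, not at plain module attributes such as self.learning_rate. A possible workaround until this is fixed, sketched with an illustrative Namespace-based hparams (these names are mine, not from the original model):

from argparse import Namespace

class TweetSegment(pl.LightningModule):
    def __init__(self, config, hparams):
        super().__init__()
        self.hparams = hparams  # 0.7.x style: the finder reads and overrides hparams.learning_rate
        # ... same layers as above ...

    def configure_optimizers(self):
        # take the lr from hparams so the value chosen by auto_lr_find is actually used
        return torch.optim.AdamW(self.parameters(), lr=self.hparams.learning_rate)

tweetModel = TweetSegment(config, Namespace(learning_rate=3e-5))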

Expected behavior

No error while running the 'fit'

Environment

  • CUDA:
    • GPU: Tesla P100-PCIE-16GB
    • available: True
    • version: 10.1
  • Packages:
    • numpy: 1.18.1
    • pyTorch_debug: False
    • pyTorch_version: 1.5.0
    • pytorch-lightning: 0.7.6
    • tensorboard: 2.1.1
    • tqdm: 4.45.0
  • System:
    • OS: Linux
    • architecture: 64bit
    • processor: x86_64
    • python: 3.7.6
    • version: #1 SMP Wed May 6 00:27:44 PDT 2020

Labels: Priority P0, help wanted

All 10 comments

Hi! Thanks for your contribution, great first issue!

The problem does not seem to be present on the master branch, could you try upgrading?

So this seems to be a bug that will be fixed by #1988?

“The problem does not seem to be present on the master branch, could you try upgrading?”

I am already on 0.7.6, so I am not sure how to upgrade to the master branch. Can you please guide me?

See the bottom of the docs, under the “bleeding edge” installation instructions.

I am having the same issue. I am using self.hparams as a dict, on 0.7.6. Could someone give some suggestions?

I get the same error, while having hparams.lr, even with 0.8.0.

We need to adjust the learning rate finder to work with the new hparams. @SkafteNicki
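
For readers following along: with newer-style modules the learning rate may live directly on the module (self.learning_rate) or inside self.hparams (a dict or a Namespace-like object), so the finder has to check several places before raising. An illustrative sketch of that lookup, not Lightning's actual implementation:

def find_lr_field(model, names=("lr", "learning_rate")):
    # 1) plain module attribute, e.g. self.learning_rate
    for name in names:
        if hasattr(model, name):
            return name, getattr(model, name)
    # 2) fall back to hparams, which may be a dict or a Namespace-like object
    hparams = getattr(model, "hparams", None)
    for name in names:
        if isinstance(hparams, dict) and name in hparams:
            return name, hparams[name]
        if hparams is not None and hasattr(hparams, name):
            return name, getattr(hparams, name)
    raise AttributeError("model exposes neither lr nor learning_rate for the LR finder")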

@SkafteNicki is this still broken on master?

@edenlightning I checked this morning, and the problem still seems to be present. I will create a PR soon with a fix.
