I am using the auto_lr_find feature as below.
trainer = pl.Trainer(fast_dev_run=False, gpus=1, auto_lr_find=True)
My model has a self.learning_rate parameter, as shown below (part of the model).
import torch
import torch.nn as nn
import pytorch_lightning as pl
from transformers import BertModel

class TweetSegment(pl.LightningModule):
    def __init__(self, config, lr=3e-5):
        super(TweetSegment, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased', config=config)
        self.drop_out = nn.Dropout(0.1)
        self.fullyConnected = nn.Sequential(nn.Linear(2 * 768, 2), nn.ReLU())
        # learning rate stored as a plain attribute
        self.learning_rate = lr
        self._init_initial()

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.learning_rate)
When I call 'fit' using the line below:
trainer.fit(tweetModel, train_dataloader=training_loader, val_dataloaders=valid_loader)
I still get this error:
MisconfigurationException: When auto_lr_find is set to True, expects that hparams either has field `lr` or `learning_rate` that can overridden
There is no error while running 'fit' itself.
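For anyone hitting this on 0.7.6: below is a minimal workaround sketch, assuming (based only on the error message, not the library source) that the finder looks for an lr or learning_rate field on model.hparams. The idea is to expose the value through hparams as well:

import torch
from argparse import Namespace
import pytorch_lightning as pl

class TweetSegment(pl.LightningModule):
    def __init__(self, config, lr=3e-5):
        super().__init__()
        # ... same layers as in the original model above ...
        # keep the plain attribute, but also expose the value via hparams
        # so that a check on model.hparams can find it (assumption)
        self.hparams = Namespace(lr=lr)
        self.learning_rate = lr

    def configure_optimizers(self):
        # read the (possibly overridden) value back from hparams
        return torch.optim.AdamW(self.parameters(), lr=self.hparams.lr)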
Hi! Thanks for your contribution, great first issue!
The problem does not seem to be present on the master branch; could you try upgrading?
So this seems to be a bug that will be fixed by #1988?
> The problem does not seem to be present on the master branch; could you try upgrading?
I am already on 0.7.6, so I am not sure how to upgrade to the master branch. Can you please guide me?
See the bottom of the docs, under "bleeding edge".
I now have the same question. I am using self.hparams as a dict, on 0.7.6. Could someone give some suggestions?
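If the check uses attribute access on hparams (an assumption; a plain dict would then fail the lookup even though it contains the key), one thing to try is wrapping the dict in a Namespace. A hypothetical minimal model for illustration:

from argparse import Namespace
import pytorch_lightning as pl

class MyModel(pl.LightningModule):  # hypothetical, for illustration only
    def __init__(self, hparams_dict):
        super().__init__()
        # convert the plain dict to a Namespace so fields such as lr
        # are reachable as attributes (self.hparams.lr), not only as keys
        self.hparams = Namespace(**hparams_dict)
        self.learning_rate = self.hparams.lr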
I get the same error even with 0.8.0, despite having hparams.lr.
We need to adjust the learning rate finder to work with the new hparams. @SkafteNicki
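For illustration only, here is a hypothetical sketch of the kind of lookup the finder would need (this is not the actual fix or the PR, and _resolve_lr_field is an invented helper name):

def _resolve_lr_field(model):
    # hypothetical helper: accept either a direct attribute on the model
    # or a field on model.hparams, whichever exposes the learning rate
    for name in ('lr', 'learning_rate'):
        if hasattr(model, name):
            return name, getattr(model, name)
        if hasattr(model, 'hparams') and hasattr(model.hparams, name):
            return 'hparams.' + name, getattr(model.hparams, name)
    raise ValueError('auto_lr_find expects an `lr` or `learning_rate` field on the model or its hparams')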
@SkafteNicki is this still broken on master?
@edenlightning I checked this morning, and the problem still seems to be present. I will create a PR soon with a fix.