I receive this warning when running a simple Lightning module. Is it related to the recent update? Maybe it is related to the old hyperparameters (hparams) design?
[path to]/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:23: UserWarning: Did not find hyperparameters at model hparams. Saving checkpoint without hyperparameters.
warnings.warn(*args, **kwargs)
I am using the following code sample:
import os
import torch
from torch.nn import functional as F
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
from torchvision import transforms
from pytorch_lightning import LightningModule
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger
from torch.utils.tensorboard import SummaryWriter


class MyNet(LightningModule):
    def __init__(self):
        super(MyNet, self).__init__()
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.001)

    def train_dataloader(self):
        dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
        loader = DataLoader(dataset, batch_size=32, num_workers=4, shuffle=True)
        return loader

    def test_dataloader(self):
        dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
        loader = DataLoader(dataset, batch_size=32, num_workers=4, shuffle=False)
        return loader

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        tensorboard_logs = {'train_loss': loss}
        return {'loss': loss, 'log': tensorboard_logs}

    def test_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        tensorboard_logs = {'test_loss': loss}
        return {'loss': loss, 'log': tensorboard_logs}

    def test_epoch_end(self, outputs):
        with SummaryWriter(self.logger.log_dir) as w:
            for i in range(5):
                w.add_hparams({'lr': 0.1 * i, 'bsize': i}, {'hparam/accuracy': 10 * i, 'hparam/loss': 10 * i})
        return {}


dir_path = "."
tb_logger = TensorBoardLogger(dir_path, name='run2')
model = MyNet()
trainer = Trainer(gpus=1, max_epochs=1, logger=tb_logger)
trainer.fit(model)
trainer.test()
Hi there!
It seems to me that the problem you are facing is caused by the version of PyTorch Lightning you're using. If I'm not mistaken, all the get-rid-of-hparams features were added after 0.7.6 was released. Try upgrading to the latest version of the library, although I'm not completely sure it's safe to use right now. After the upgrade, I get this mysterious warning the first time I run a cell with the code you wrote. I haven't figured out yet whether that's fine or not.
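For reference, the older hparams-based pattern that the warning refers to looked roughly like this. This is only a sketch assuming the 0.7.x-style API, where hparams is typically an argparse.Namespace stored at self.hparams (the class name and argument values here are just examples):

from argparse import Namespace

class MyNetWithHparams(LightningModule):
    def __init__(self, hparams):
        super().__init__()
        # storing the namespace at self.hparams is what lets 0.7.x-style
        # checkpoints pick up the hyperparameters (hence the warning about
        # not finding hyperparameters at "model hparams")
        self.hparams = hparams
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)

model = MyNetWithHparams(Namespace(lr=0.001, batch_size=32))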

Oh, I see. That is possible; maybe it is because the new features have not been integrated into the stable version yet.
It's safe. The current change is that the user has to call self.auto_collect_arguments() to save all args to the checkpoint automatically.
We need to decide if we keep it that way or go back to auto-doing it.
@Borda
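For illustration, a minimal sketch of what that would look like, assuming the self.auto_collect_arguments() call works as described in the comment above (lr and batch_size are just example constructor arguments, not part of the original report):

class MyNet(LightningModule):
    def __init__(self, lr=0.001, batch_size=32):
        super().__init__()
        # collect all __init__ arguments so they get stored in the checkpoint
        # (assumes the auto_collect_arguments() API mentioned above)
        self.auto_collect_arguments()
        self.lr = lr
        self.batch_size = batch_size
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)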
If I'm not mistaken, all the get-rid-of-hparams features were added after 0.7.6 was released.
The init-argument parsing is still an unreleased change, see #1896.
After the upgrade, I get this mysterious warning the first time I run a cell with the code you wrote. I haven't figured out yet whether that's fine or not.
The same was reported here: #1976.