When running my model I get the error message: TypeError: validation_step() takes 3 positional arguments but 4 were given
Stacktrace:
line 106, in <module>
    trainer.fit(model)
line 707, in fit
    self.run_pretrain_routine(model)
line 812, in run_pretrain_routine
    self.evaluate(model, self.get_val_dataloaders(), self.num_sanity_val_steps, self.testing)
line 234, in evaluate
    test)
line 365, in evaluation_forward
    output = model.validation_step(*args)
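Note that the "takes 3 positional arguments but 4 were given" count includes self, so the last frame is passing three arguments to a method that only accepts two besides self. A standalone sketch of the same mismatch, independent of Lightning's internals:

class Model:
    def validation_step(self, val_batch, batch_idx):  # self + 2 = 3 positional args
        pass

Model().validation_step('batch', 0, 0)  # 4 args incl. self -> the same TypeError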
Steps to reproduce the behavior:
pip install pytorch-lightning
Here is my validation step:
def validation_step(self, val_batch, batch_idx):
    x, y = val_batch
    logits = self.forward(x)
    loss = self.cross_entropy_loss(logits, y)
    return {'val_loss': loss}
Expected behavior: no error.
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.2.89
GPU models and configuration: GPU 0: GeForce GTX 1050 Ti
Nvidia driver version: 442.19
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.16.1
[conda] blas 1.0 mkl
[conda] mkl 2020.0 166
[conda] mkl-service 2.3.0 py37hb782905_0
[conda] mkl_fft 1.0.15 py37h14836fe_0
[conda] mkl_random 1.1.0 py37h675688f_0
[conda] pytorch 1.4.0 py3.7_cuda101_cudnn7_0 pytorch
[conda] pytorch-ignite 0.4.0.dev20200229 pypi_0 pypi
[conda] pytorch-lightning 0.6.0 pypi_0 pypi
[conda] torchvision 0.4.1 pypi_0 pypi
[conda] torchviz 0.0.1 pypi_0 pypi
Related issue, I think:
https://github.com/PyTorchLightning/pytorch-lightning/issues/105
This happens when you return multiple dataloaders. In that case, the signature of validation_step should be:
def validation_step(self, val_batch, batch_idx, dataset_idx):
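A minimal sketch of the two pieces together, assuming the 0.6.x-era API (dummy tensors stand in for real data; forward and cross_entropy_loss are the module's own methods, as in the snippet above):

import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class LitModel(pl.LightningModule):
    # ... __init__, forward, training_step, etc. omitted ...

    def val_dataloader(self):
        # Returning a list of loaders is what makes Lightning pass the
        # extra index argument to validation_step.
        ds = TensorDataset(torch.randn(8, 10), torch.randint(0, 2, (8,)))
        return [DataLoader(ds, batch_size=4), DataLoader(ds, batch_size=4)]

    def validation_step(self, val_batch, batch_idx, dataset_idx):
        # dataset_idx identifies which of the loaders this batch came from.
        x, y = val_batch
        logits = self.forward(x)
        loss = self.cross_entropy_loss(logits, y)
        return {'val_loss': loss}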
Did you copy and run the full MNIST example given in the blog post you linked?
Yeah, I copied the full code from the blog. I'll try adding dataset_idx to the validation step once I get home.
I've encountered the same problem:
TypeError: validation_step() takes 3 positional arguments but 4 were given
It happens whether I run this code or the full version at the end of the Colab.
Looks like these notebooks install master, not 0.6.0.
If I add dataset_idx to the parameters of my validation_step, I get this error:
line 56, in validation_step
    x, y = val_batch
ValueError: too many values to unpack (expected 2)
I encountered this - you need to add the @pl.data_loader decorator above all your dataloader functions. The examples sometimes use master and seem to be inconsistent.
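To make that fix concrete, a sketch of the decorated dataloader methods (assuming the 0.6.0 API where pl.data_loader exists; self.train_ds, self.val_ds, and self.test_ds are hypothetical dataset attributes):

import pytorch_lightning as pl
from torch.utils.data import DataLoader

class LitModel(pl.LightningModule):
    # ... the rest of the module omitted ...

    @pl.data_loader
    def train_dataloader(self):
        return DataLoader(self.train_ds, batch_size=32)

    @pl.data_loader
    def val_dataloader(self):
        return DataLoader(self.val_ds, batch_size=32)

    @pl.data_loader
    def test_dataloader(self):
        return DataLoader(self.test_ds, batch_size=32)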
Thanks for your fix. I had actually seen this in the GitHub docs, I just didn't think to implement it. It fixes the error I was having.
It fixed it for me too, thanks.
Correct me if I'm wrong: in PyTorch Lightning 0.6.0 you need the @pl.data_loader decorator, but on the master version you don't?
I'm not fully clear on this either - the basic MNIST example does use the decorator. It's unclear whether that example is out of date or the other examples are.
Should I close this issue? Or is this a bug?
It should stay open. The examples need to be updated so they work with the upcoming 0.7 release.
@williamFalcon bringing this to your attention
@LuposX fixed on 0.7.1.
Try again!
Happy to reopen the issue if it's still there.
Also, the docs are much more clear now!