Pytorch-lightning: TypeError: __init__() got an unexpected keyword argument 'default_save_path'

Created on 6 Nov 2020 · 4 comments · Source: PyTorchLightning/pytorch-lightning

โ“ Questions and Help

What is your question?

I was trying to run the VAE code from PyTorch-VAE https://github.com/AntixK/PyTorch-VAE
I managed to resolve some version issues (e.g. Logging/Logger), but then I ran into this one, and I have no idea what this error means. I tried looking it up, but I haven't found anything for default_save_path.

Traceback (most recent call last):
  File "C:/Users/roonl/AppData/Roaming/JetBrains/PyCharmCE2020.1/PythonDRL/Coding/PyTorch-VAE-master/run.py", line 47, in <module>
    runner = Trainer(default_save_path=f"C:/Users/roonl/AppData/Roaming/JetBrains/PyCharmCE2020.1/PythonDRL/Coding/PyTorch-VAE-master/Mytests",
  File "C:\Users\roonl\AppData\Local\Programs\Python\Python38\lib\site-packages\pytorch_lightning\trainer\connectors\env_vars_connector.py", line 41, in overwrite_by_env_vars
    return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'default_save_path'

I'm not the best with Python, and I would really appreciate some pointers!

Code

The code I was running was the run.py in that repository.

parser = argparse.ArgumentParser(description='Generic runner for VAE models')
parser.add_argument('--config', '-c',
                    dest="filename",
                    metavar='FILE',
                    help='path to the config file',
                    default='configs/vae.yaml')

args = parser.parse_args()
with open(args.filename, 'r') as file:
    try:
        config = yaml.safe_load(file)
    except yaml.YAMLError as exc:
        print(exc)

tt_logger = TestTubeLogger(
    save_dir=config['logging_params']['save_dir'],
    name=config['logging_params']['name'],
    debug=False,
    create_git_tag=False,
)

torch.manual_seed(config['logging_params']['manual_seed'])
np.random.seed(config['logging_params']['manual_seed'])
cudnn.deterministic = True
cudnn.benchmark = False

model = vae_models[config['model_params']['name']](**config['model_params'])
experiment = VAEXperiment(model,
                          config['exp_params'])

runner = Trainer(default_save_path=f"{tt_logger.save_dir}",
                 min_nb_epochs=1,
                 logger=tt_logger,
                 log_save_interval=100,
                 train_percent_check=1.,
                 val_percent_check=1.,
                 num_sanity_val_steps=5,
                 early_stop_callback=False,
                 **config['trainer_params'])

print(f"======= Training {config['model_params']['name']} =======")
runner.fit(experiment)

What's your environment?

I'm running Python 3.8 (64-bit) with the latest PyTorch and PyTorch Lightning packages on Windows 10.

question

All 4 comments

Hi! Thanks for your contribution, great first issue!

runner = Trainer(default_save_path=f"{tt_logger.save_dir}"

The error is coming from here.

It should be default_root_dir, I think:
https://github.com/PyTorchLightning/pytorch-lightning/blob/5e09fd31e9850902fc0a79e92aaba7b80dae1944/pytorch_lightning/trainer/trainer.py#L182-L184
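The shape of the traceback can be reproduced with a tiny stand-in for Lightning's decorator (a minimal sketch, not Lightning's real implementation; the class and names below only mirror the traceback): `overwrite_by_env_vars` forwards `**kwargs` straight into `__init__`, so a removed keyword argument only raises at the `fn(self, **kwargs)` call.

```python
import functools


def overwrite_by_env_vars(fn):
    # Simplified stand-in: forward all keyword arguments to the
    # wrapped __init__, as the real decorator does in the traceback.
    @functools.wraps(fn)
    def wrapper(self, **kwargs):
        return fn(self, **kwargs)
    return wrapper


class Trainer:
    @overwrite_by_env_vars
    def __init__(self, default_root_dir="."):
        self.default_root_dir = default_root_dir


Trainer(default_root_dir="./Mytests")        # accepted: current keyword
try:
    Trainer(default_save_path="./Mytests")   # rejected: removed keyword
except TypeError as e:
    print(e)  # e.g. "__init__() got an unexpected keyword argument 'default_save_path'"
```

This is why the error message mentions `__init__()` even though the user never calls it directly.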

Thank you! That resolved my original issue! I see that the other Trainer parameters are from older versions too. Is there a place I can check which old parameter corresponds to which new one?
I'm having trouble finding:
log_save_interval (I assume would be log_every_n_steps)
train_percent_check (I assume would be limit_train_batches)
val_percent_check (I assume would be val_check_interval)
early_stop_callback (I cannot find one similar)

@RoonLoe early_stop_callback -> callbacks
The rest of your assumptions are right.
To map old params to new ones, you can only browse the docs for the old versions.
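For reference, the renames discussed in this thread can be collected in a small lookup table. This is an illustrative helper, not part of Lightning; note also that, as far as I can tell, val_percent_check actually became limit_val_batches (val_check_interval is a separate setting), and early_stop_callback has no one-to-one rename since it was replaced by passing an EarlyStopping instance via callbacks.

```python
# Illustrative mapping of the deprecated Trainer keyword arguments from
# this thread to their modern names (not an exhaustive or official list).
RENAMED_TRAINER_ARGS = {
    "default_save_path": "default_root_dir",
    "min_nb_epochs": "min_epochs",
    "log_save_interval": "log_every_n_steps",
    "train_percent_check": "limit_train_batches",
    "val_percent_check": "limit_val_batches",
    # early_stop_callback is intentionally absent: it became
    # callbacks=[EarlyStopping(...)], not a simple rename.
}


def modernize_kwargs(kwargs):
    """Return a copy of kwargs with deprecated names replaced."""
    return {RENAMED_TRAINER_ARGS.get(k, k): v for k, v in kwargs.items()}


old = {"default_save_path": "./Mytests", "train_percent_check": 1.0}
print(modernize_kwargs(old))
# -> {'default_root_dir': './Mytests', 'limit_train_batches': 1.0}
```

Unrecognized keywords pass through unchanged, so current-style arguments like logger are left alone.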

Closing this as resolved; feel free to reopen if needed.

