Pytorch-lightning: Wandb Flatten Dict

Created on 2 Jul 2020 · 3 comments · Source: PyTorchLightning/pytorch-lightning

The Wandb logger should flatten the dictionary of parameters before logging. Every other logger follows the pattern below:

 params = self._convert_params(params)
 params = self._flatten_dict(params)
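
For context, here is a minimal sketch of what that flattening does. flatten_dict below is an illustrative stand-in, not the library helper; as far as I can tell, LightningLoggerBase._flatten_dict joins nested keys with a delimiter, "/" by default:

    def flatten_dict(params, delimiter="/"):
        """Join nested keys with a delimiter so every hyperparameter becomes a flat entry."""
        flat = {}
        for key, value in params.items():
            if isinstance(value, dict):
                # Recurse into nested dicts and prefix their keys with the parent key.
                for sub_key, sub_value in flatten_dict(value, delimiter).items():
                    flat[f"{key}{delimiter}{sub_key}"] = sub_value
            else:
                flat[key] = value
        return flat

    print(flatten_dict({"optimizer": {"name": "adam", "lr": 1e-3}}))
    # -> {'optimizer/name': 'adam', 'optimizer/lr': 0.001}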

🐛 Bug

The Wandb logger does not flatten parameters, so nested dictionaries are logged to Wandb as-is. These dict-valued entries are not searchable, which loses some wandb features.

To Reproduce

Run the cpu_template with the wandb logger and log a nested dictionary of hyperparameters, as sketched below.
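
A minimal sketch of such a reproduction (the project name and the nested dict are made up for illustration):

    from pytorch_lightning.loggers import WandbLogger

    logger = WandbLogger(project="flatten-test")  # hypothetical project name
    # With 0.8.4 this nested dict reaches wandb as a single dict-valued config
    # entry, so keys such as optimizer/lr cannot be searched or filtered on.
    logger.log_hyperparams({"optimizer": {"name": "adam", "lr": 1e-3}, "batch_size": 32})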

Expected behavior

Call params = self._flatten_dict(params) in the wandb logger, as the other loggers already do.
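
A sketch of what that could look like, assuming the logger keeps forwarding the flattened params to experiment.config.update as it does today (the rank_zero_only decorator used by the built-in loggers is omitted for brevity):

    from pytorch_lightning.loggers import WandbLogger

    class FlattenedWandbLogger(WandbLogger):
        """Workaround subclass until the flattening lands upstream (sketch)."""

        def log_hyperparams(self, params):
            params = self._convert_params(params)
            params = self._flatten_dict(params)  # the missing call
            self.experiment.config.update(params, allow_val_change=True)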

Environment

  • CUDA:

    • GPU:

    • available: False

    • version: None

  • Packages:

    • numpy: 1.18.5

    • pyTorch_debug: False

    • pyTorch_version: 1.5.0

    • pytorch-lightning: 0.8.4

    • tensorboard: 2.2.2

    • tqdm: 4.46.1

  • System:

    • OS: Darwin

    • architecture: 64bit

    • processor: i386

    • python: 3.7.7

    • version: Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64

Labels: logger · bug / fix · help wanted

All 3 comments

[screenshot of the nested hparams as they currently appear in wandb]

Is this what you mean?
I agree, it would be nice if we could flatten that out.

Exactly, this commit will make hparams look like:

[screenshot: hparams shown flattened in the wandb UI, 2020-07-06]

perfect!

