Pytorch-lightning: AttributeError: module 'logging' has no attribute 'TensorBoardLogger'

Created on 22 Jan 2020  ·  12 Comments  ·  Source: PyTorchLightning/pytorch-lightning

🐛 Bug

Following the docs, I tried:

import pytorch_lightning as pl
logger = pl.logging.TensorBoardLogger(...)

But I receive an error:
AttributeError: module 'logging' has no attribute 'TensorBoardLogger'

To Reproduce

ubuntu@ip-172-31-41-72:~$ mkdir pltest

ubuntu@ip-172-31-41-72:~$ cd pltest/

ubuntu@ip-172-31-41-72:~/pltest$ pipenv --python 3.7
Creating a virtualenv for this project…
Pipfile: /home/ubuntu/pltest/Pipfile
Using /usr/bin/python3.7 (3.7.3) to create virtualenv…
⠋ Creating virtual environment...Already using interpreter /usr/bin/python3.7
Using base prefix '/usr'
New python executable in /home/ubuntu/.local/share/virtualenvs/pltest-rVdlPDKy/bin/python3.7
Also creating executable in /home/ubuntu/.local/share/virtualenvs/pltest-rVdlPDKy/bin/python
Installing setuptools, pip, wheel...
done.
✔ Successfully created virtual environment! 
Virtualenv location: /home/ubuntu/.local/share/virtualenvs/pltest-rVdlPDKy
Creating a Pipfile for this project…

ubuntu@ip-172-31-41-72:~/pltest$ pipenv install pytorch-lightning
Installing pytorch-lightning…
✔ Installation Succeeded 
Pipfile.lock not found, creating…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
✔ Success! 
Updated Pipfile.lock (d6238a)!
Installing dependencies from Pipfile.lock (d6238a)…
  🐍   ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 34/34 — 00:00:39
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.

ubuntu@ip-172-31-41-72:~/pltest$ pipenv graph | grep lightning
pytorch-lightning==0.6.0

ubuntu@ip-172-31-41-72:~/pltest$ pipenv run python
Python 3.7.3 (default, Mar 26 2019, 01:59:45) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pytorch_lightning as pl
>>> pl.logging.TestTubeLogger
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'logging' has no attribute 'TestTubeLogger'
>>> pl.logging.TensorBoardLogger
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'logging' has no attribute 'TensorBoardLogger'
>>> pl.logging.__dict__.keys()
dict_keys(['__name__', '__doc__', '__package__', '__loader__', '__spec__', '__path__', '__file__', '__cached__', '__builtins__', 'sys', 'os', 'time', 'io', 'traceback', 'warnings', 'weakref', 'collections', 'Template', '__all__', 'threading', '__author__', '__status__', '__version__', '__date__', '_startTime', 'raiseExceptions', 'logThreads', 'logMultiprocessing', 'logProcesses', 'CRITICAL', 'FATAL', 'ERROR', 'WARNING', 'WARN', 'INFO', 'DEBUG', 'NOTSET', '_levelToName', '_nameToLevel', 'getLevelName', 'addLevelName', 'currentframe', '_srcfile', '_checkLevel', '_lock', '_acquireLock', '_releaseLock', '_at_fork_acquire_release_weakset', '_register_at_fork_acquire_release', '_at_fork_weak_calls', '_before_at_fork_weak_calls', '_after_at_fork_weak_calls', 'LogRecord', '_logRecordFactory', 'setLogRecordFactory', 'getLogRecordFactory', 'makeLogRecord', 'PercentStyle', 'StrFormatStyle', 'StringTemplateStyle', 'BASIC_FORMAT', '_STYLES', 'Formatter', '_defaultFormatter', 'BufferingFormatter', 'Filter', 'Filterer', '_handlers', '_handlerList', '_removeHandlerRef', '_addHandlerRef', 'Handler', 'StreamHandler', 'FileHandler', '_StderrHandler', '_defaultLastResort', 'lastResort', 'PlaceHolder', 'setLoggerClass', 'getLoggerClass', 'Manager', 'Logger', 'RootLogger', '_loggerClass', 'LoggerAdapter', 'root', 'basicConfig', 'getLogger', 'critical', 'fatal', 'error', 'exception', 'warning', 'warn', 'info', 'debug', 'log', 'disable', 'shutdown', 'atexit', 'NullHandler', '_warnings_showwarning', '_showwarning', 'captureWarnings'])
>>> from pytorch_lightning.logging.tensorboard import TensorBoardLogger
>>> TensorBoardLogger
<class 'pytorch_lightning.logging.tensorboard.TensorBoardLogger'>
>>> 

Code sample

Expected behavior

pl.logging.TensorBoardLogger should resolve to pytorch_lightning's TensorBoardLogger class, as described in the docs.

Environment

Collecting environment information...
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.1.243

OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
CMake version: Could not collect

Python version: 3.7
Is CUDA available: No
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB
Nvidia driver version: 410.104
cuDNN version: Could not collect

Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] pytorch-lightning==0.6.0
[pip3] torch==1.3.1
[pip3] torchvision==0.4.2
[conda] Could not collect

Additional context

Labels: bug / fix · good first issue · help wanted


All 12 comments

Having the same issue but from pytorch_lightning.logging.tensorboard import TensorBoardLogger works for me.

Sounds like an issue with how the TensorBoardLogger is imported in pytorch_lightning.logging.
See https://realpython.com/python-modules-packages/#python-packages

Yeah there is something odd with the logging submodule. I am getting this:

In [17]: pytorch_lightning.__path__                              
Out[17]: ['/Users/j.grenier/.virtualenvs/pytorch_mnist/lib/python3.7/site-packages/pytorch_lightning']

In [18]: pytorch_lightning.logging.__path__                      
Out[18]: ['/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging']

Notice that pytorch_lightning.logging is pointing to the standard Python 3.7 logging module.

wow, good point... :+1:
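
The shadowing diagnosed above can be simulated in isolation. This is a hypothetical sketch, not the actual pytorch_lightning code: pkg stands in for the package, and the rebinding at the end mimics what a plain import logging in the package's __init__.py would do to the pkg.logging attribute.

```python
import sys
import types

# Simulate a package "pkg" that ships a subpackage "pkg.logging".
pkg = types.ModuleType("pkg")
sub = types.ModuleType("pkg.logging")
sub.TensorBoardLogger = type("TensorBoardLogger", (), {})
sys.modules["pkg"] = pkg
sys.modules["pkg.logging"] = sub
pkg.logging = sub  # what the import machinery normally sets

# If the package's __init__.py later executes `import logging`, the
# name "logging" in the package namespace is rebound to the stdlib
# module, shadowing the subpackage attribute:
import logging as _stdlib_logging
pkg.logging = _stdlib_logging  # the suspected shadowing, simulated

print(hasattr(pkg.logging, "TensorBoardLogger"))                  # False
print(hasattr(sys.modules["pkg.logging"], "TensorBoardLogger"))   # True
```

This is also consistent with why the direct from pytorch_lightning.logging.tensorboard import ... path still works: submodule imports go through sys.modules, not through the shadowed package attribute.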

I'm having a similar problem with the loggers, like many others. When I run my code:

trainer = pl.Trainer(train_percent_check=0.1)
trainer.fit(model)

I get the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-10-1aee9f118f5f> in <module>
      1 trainer = pl.Trainer(train_percent_check=0.1)
----> 2 trainer.fit(model)

~/miniconda3/envs/adv/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model)
    755             self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
    756 
--> 757             self.run_pretrain_routine(model)
    758 
    759         # return 1 when finished

~/miniconda3/envs/adv/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_pretrain_routine(self, model)
    804             # save exp to get started
    805             if hasattr(ref_model, "hparams"):
--> 806                 self.logger.log_hyperparams(ref_model.hparams)
    807 
    808             self.logger.save()

~/miniconda3/envs/adv/lib/python3.7/site-packages/pytorch_lightning/logging/base.py in wrapped_fn(self, *args, **kwargs)
     12     def wrapped_fn(self, *args, **kwargs):
     13         if self.rank == 0:
---> 14             fn(self, *args, **kwargs)
     15 
     16     return wrapped_fn

~/miniconda3/envs/adv/lib/python3.7/site-packages/pytorch_lightning/logging/tensorboard.py in log_hyperparams(self, params)
     86         else:
     87             # `add_hparams` requires both - hparams and metric
---> 88             self.experiment.add_hparams(hparam_dict=params, metric_dict={})
     89         # some alternative should be added
     90         self.tags.update(params)

~/miniconda3/envs/adv/lib/python3.7/site-packages/torch/utils/tensorboard/writer.py in add_hparams(self, hparam_dict, metric_dict)
    298         if type(hparam_dict) is not dict or type(metric_dict) is not dict:
    299             raise TypeError('hparam_dict and metric_dict should be dictionary.')
--> 300         exp, ssi, sei = hparams(hparam_dict, metric_dict)
    301 
    302         logdir = os.path.join(

~/miniconda3/envs/adv/lib/python3.7/site-packages/torch/utils/tensorboard/summary.py in hparams(hparam_dict, metric_dict)
    154             ssi.hparams[k].number_value = v
    155             continue
--> 156         raise ValueError('value should be one of int, float, str, bool, or torch.Tensor')
    157 
    158     content = HParamsPluginData(session_start_info=ssi,

ValueError: value should be one of int, float, str, bool, or torch.Tensor

I thought I could create a test-tube logger instead, but the logging module does not have that either. I have the latest code from master installed via

python -m pip install git+https://github.com/williamFalcon/pytorch-lightning.git@master --upgrade

I can't train my models. What are my options here?
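
One stopgap for the ValueError above (not suggested in this thread, so treat it as an untested sketch) is to sanitize the hparams before they reach the summary writer, since add_hparams only accepts int, float, str, bool, or torch.Tensor values. sanitize_hparams is a hypothetical helper name:

```python
def sanitize_hparams(params: dict) -> dict:
    """Coerce hparam values TensorBoard cannot serialize into strings.

    torch's `add_hparams` raises ValueError for anything that is not an
    int, float, str, bool, or torch.Tensor; everything else is
    stringified here so hyperparameter logging does not abort training.
    """
    allowed = (int, float, str, bool)
    return {k: v if isinstance(v, allowed) else str(v)
            for k, v in params.items()}

print(sanitize_hparams({"lr": 0.1, "layers": [64, 64]}))
# {'lr': 0.1, 'layers': '[64, 64]'}
```

You would apply this to model.hparams (or whatever dict you log) before handing it to the logger.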

We are working on a fix; see PR #767.

Thank you @Borda. Since I'm not able to train my model without a logger, are there any other options other than waiting for the fix to come in?

Train it with a logger, using the advice above about the alternative import statement?

I'm assuming you are referring to @chutaklee's reply about using from pytorch_lightning.logging.tensorboard import TensorBoardLogger. Unfortunately, this throws an error about tensorboard not being found. This is because, as @Borda stated earlier, the pl.logging module currently points to Python's logging module.

Really? Check out my initial description, under To Reproduce: I install pytorch-lightning (nothing else), and then am able to access TensorBoardLogger via from pytorch_lightning.logging.tensorboard import TensorBoardLogger. Maybe you're using an older version of pytorch which doesn't include tensorboard?
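
A quick way to test that guess is to probe for the TensorBoard writer directly. has_tensorboard_writer is a hypothetical helper, relying only on the fact that importing torch.utils.tensorboard raises ImportError when torch is too old or no usable tensorboard backend is installed:

```python
import importlib


def has_tensorboard_writer() -> bool:
    """Return True if torch.utils.tensorboard is importable.

    On older torch builds, or when the tensorboard package (or torch
    itself) is missing, the import raises ImportError and we report
    False instead of crashing.
    """
    try:
        importlib.import_module("torch.utils.tensorboard")
        return True
    except ImportError:
        return False


print(has_tensorboard_writer())
```

If this prints False, the "tensorboard not found" error comes from the torch installation rather than from the pl.logging shadowing bug.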

Well, we have a short, simple partial fix in #768.

The workaround from @colllin didn't work for me, but from pytorch_lightning.logging import TensorBoardLogger does.
