Steps to reproduce the behavior:
1. Create a virtual environment: python -m venv venv
2. Activate the virtual environment: source venv/bin/activate
3. pip install torchvision
4. Run the following line: import torchvision
Python 3.5.2 (default, Apr 16 2020, 17:47:17)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchvision
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torchvision/__init__.py", line 3, in <module>
from torchvision import models
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torchvision/models/__init__.py", line 12, in <module>
from . import detection
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torchvision/models/detection/__init__.py", line 1, in <module>
from .faster_rcnn import *
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torchvision/models/detection/faster_rcnn.py", line 14, in <module>
from .roi_heads import RoIHeads
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torchvision/models/detection/roi_heads.py", line 210, in <module>
@torch.jit.script
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torch/jit/__init__.py", line 1290, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torch/jit/_recursive.py", line 568, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torch/jit/__init__.py", line 1290, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torch/jit/__init__.py", line 2030, in _get_overloads
compiled_fns.append(_compile_function_with_overload(overload_fn, qual_name, obj))
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torch/jit/__init__.py", line 2010, in _compile_function_with_overload
overload_signature = torch.jit.annotations.get_signature(overload_fn, None, None, inspect.ismethod(overload_fn))
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torch/jit/annotations.py", line 79, in get_signature
signature = parse_type_line(type_line, rcb, loc)
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torch/jit/annotations.py", line 165, in parse_type_line
arg_types = [ann_to_type(ann, loc) for ann in arg_ann]
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torch/jit/annotations.py", line 165, in <listcomp>
arg_types = [ann_to_type(ann, loc) for ann in arg_ann]
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torch/jit/annotations.py", line 303, in ann_to_type
the_type = try_ann_to_type(ann, loc)
File "/var/www/rabbit.goodtrees.com/env/lib/python3.5/site-packages/torch/jit/annotations.py", line 296, in try_ann_to_type
the_type = torch._C._resolve_type_from_object(ann, loc, fake_rcb)
TypeError: _resolve_type_from_object(): incompatible function arguments. The following argument types are supported:
1. (arg0: object, arg1: torch._C._jit_tree_views.SourceRange, arg2: Callable[[str], function]) -> torch._C.Type
Invoked with: typing.Union[int, NoneType], None, <function try_ann_to_type.<locals>.fake_rcb at 0x7f566909fd90>
>>>
Expected it to import successfully.
Please copy and paste the output from our environment collection script (or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
Collecting environment information...
PyTorch version: 1.5.0
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Ubuntu 16.04.3 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
CMake version: Could not collect
Python version: 3.5
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] numpy==1.18.3
[pip3] torch==1.5.0
[pip3] torchvision==0.6.0
[conda] Could not collect
I tried it with pip install torchvision==0.5 and it worked.
I cannot reproduce with conda on macos or linux:
conda create -n torchvision python=3.5
conda activate torchvision
pip install torchvision
python -c "import torchvision; print(torchvision.__version__)"
# 0.6.0
Can you make sure the virtual environment is linked correctly?
How do I check whether it's linked correctly? If it's not reproducible, I can just use 0.5.0 and close this issue.
If you could import it, I would have asked you to check torchvision.__file__, but you can't import it :)
Have you tried with conda, then? Have you made sure pip (or conda) is up to date?
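For reference, one way to check which torchvision the environment would resolve without importing it (a minimal sketch using only the standard library's importlib.util):
# Locate the torchvision package that would be imported, without running
# torchvision/__init__.py (which is where the crash happens).
import importlib.util
spec = importlib.util.find_spec("torchvision")
print(spec.origin if spec is not None else "torchvision not found on sys.path")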
Exact same issue and log. Ubuntu 16.04.6 and virtualenv here.
This could be an issue with TorchScript not properly supporting Python 3.5, although I'm not 100% sure about it.
cc @eellison for the torchscript error, and @seemethere for the (potential) issue with binaries.
Or, another possibility is that virtualenv doesn't keep the original .py sources but only the .pyc files; that's a guess that needs to be verified.
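If the TorchScript theory is right, a hypothetical repro sketch (the function below is made up, not taken from torchvision, but it exercises the same Optional type-comment path the traceback goes through) would be:
# Hypothetical sketch: script a function whose type comment uses Optional,
# the annotation kind that annotations.py fails to resolve on Python 3.5.
import torch
from typing import Optional

@torch.jit.script
def scale_maybe(x, factor):
    # type: (torch.Tensor, Optional[int]) -> torch.Tensor
    if factor is not None:
        return x * factor
    return x

print(scale_maybe(torch.ones(2), 3))  # compiles on Python 3.6+; 3.5 may hit the error above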
I have the same problem on Ubuntu 16.04 and
Python 3.5.2 (default, Apr 16 2020, 17:47:17)
[GCC 5.4.0 20160609].
I am running everything in a virtualenv and in a Jupyter notebook.
Name: torch
Version: 1.5.0
Name: torchvision
Version: 0.6.0
Same problem and same environment as above.
Hi,
My guess is that virtualenv is creating symlinks for the .pyc files but not keeping the .py files.
Can you try creating an environment with the python -m venv --copies flag to see if this fixes it?
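Something like the following, assuming the environment directory is named venv as in the original steps (just a sketch of the suggestion above):
python -m venv --copies venv
source venv/bin/activate
pip install torchvision
python -c "import torchvision; print(torchvision.__file__)"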
Added the --copies flag; I still get the error.
Same issue here: Ubuntu 16.04, Python 3.5.2, PyTorch 1.5, torchvision 0.6. If I try to install torchvision 0.5, pip will also try to reinstall the previous torch version...
Downgrading to 1.4 works
python3 -m pip install torch==1.4.0+cu92 torchvision==0.5.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
I can confirm that downgrading to 1.4 and 0.5.0 works
Still, downgrading to 1.4 is not a solution for some, since the 1.4 release has a bug that precludes using integer nn.Parameter layers with disabled gradients while using DistributedDataParallel:
https://github.com/pytorch/pytorch/issues/32018
Would be good to have Python 3.5 support restored in the upcoming torchvision versions.
@vshampor Is it possible for you to upgrade to a more recent version of Python?
In my experience, the transition from Python 3.5 to 3.6 is relatively painless.
The number of users on Python 3.5 is relatively small, so we've decided to drop support in the hope that our engineers can have a more focused view when it comes to debugging and solving issues.
@seemethere, that's what we had to do, and it indeed did not cause much pain, but I suppose the official PyTorch website should be updated accordingly to explicitly state that only Python >= 3.6 is supported for PyTorch >= 1.5. Currently the website rather reads as "Python >= 3.6 is recommended", at least for the pip/Linux part of the installation instructions.
I was facing the same issue, so I downgraded torchvision to 0.5 and it worked.
FYI, this issue should have been fixed in torchvision 0.7.0.