PyTorch: [v1.5.0] Release Tracker

Created on 19 Mar 2020 · 72 comments · Source: pytorch/pytorch

The 1.5 release branch has been cut!

This issue is for tracking cherry-picks to the release branch. The criteria for a change to be included is ALL of the following:

  1. bug fix or documentation fix/improvement (i.e. NOT a feature)
  2. already landed in master or specific to 1.5 (e.g. deprecating a feature in 1.5)
  3. worth the risk/reward. For example, a large change that fixes a tiny bug would not be included.

If you need a change to be cherry-picked to the branch, please open a PR against the release/1.5 branch and comment below. Any details you can provide about the criteria above (e.g. links to master PR, description of the severity of the bug fix) are helpful. Ultimately we will make an inclusion / no inclusion call and communicate that here.

NOTE: do not land the PRs yourself. Our normal tools (ghstack / ghimport, etc.) do not work on the release branch. Someone else will land your change for you.


Current PRs open against the release/1.5 branch

Label: triaged

All 72 comments

Docs change once it lands https://github.com/pytorch/pytorch/pull/35007

EDIT: release/1.5 PR https://github.com/pytorch/pytorch/pull/35043


This was included when we fast-forwarded the release branch.

#34934 fixes maxpool (somehow it didn't make it into the release cut)


This was included when we fast-forwarded the release branch.

@jamesr66a: looks good to go, can you open a PR against the release branch?

https://github.com/pytorch/pytorch/pull/35340 may be a candidate. Autocasting completes PyTorch's native automatic mixed precision support, which I've been writing for over 6 months now, targeting 1.5. It has thorough documentation and test coverage. It was approved and merged already, but reverted for minor fixes.

Update: PR against master https://github.com/pytorch/pytorch/pull/35102 has been re-merged and stuck the landing so far. https://github.com/pytorch/pytorch/pull/35340 cherry-picks those diffs onto 1.5 (+3 lines of cosmetic docstring changes).


Feature, sorry :(. It's in nightlies though.

C++ frontend fixes:

Cherry-picked into 1.5:

  • #35022 Fix AdaptiveAvgPool{2,3}d and AdaptiveMaxPool{2,3}d implementation (release/1.5 PR: https://github.com/pytorch/pytorch/pull/35380)
  • #35023 Fix Conv and ConvTranspose implementation (release/1.5 PR: https://github.com/pytorch/pytorch/pull/35380)
  • #35024 Fix fractional_max_pool3d_with_indices implementation (release/1.5 PR: https://github.com/pytorch/pytorch/pull/35380)
  • #35025 Fix F::interpolate and torch::nn::Upsample implementation (release/1.5 PR: https://github.com/pytorch/pytorch/pull/35380)
  • #35147 Add inplace tests for several torch::nn modules / functionals (release/1.5 PR: https://github.com/pytorch/pytorch/pull/35380)
  • #35163 Renaming: MultiLabelMarginLossFuncOptions -> MultilabelMarginLossFuncOptions, MultiLabelSoftMarginLossFuncOptions -> MultilabelSoftMarginLossFuncOptions (release/1.5 PR: https://github.com/pytorch/pytorch/pull/35380)
  • #35001 Add xor_convergence test for lbfgs (release/1.5 PR: https://github.com/pytorch/pytorch/pull/35440)
  • #34957 Merge Optimizer and LossClosureOptimizer (release/1.5 PR: https://github.com/pytorch/pytorch/pull/35439)

Merged

~https://github.com/pytorch/pytorch/pull/35133~

^We'd like to get this in because it aligns torchbind's qualified names with the custom op API and sets us up to converge the two in the future. Getting this into 1.5 would save us a lot of BC headaches down the road.

EDIT: Was reverted. Will update with superseding PR soon
EDIT2: Updated PR https://github.com/pytorch/pytorch/pull/35303


Merged

https://github.com/pytorch/pytorch/pull/35146

^ Docs fix of DP vs DDP on CUDA


Merged

Leaving it here because this PR fixes a high-priority issue. It adds support for tensor shape type hints.


This made the release branch.

https://github.com/pytorch/pytorch/pull/35242

^ Script fix to unblock all of CI


Merged

https://github.com/pytorch/pytorch/pull/35275

^ Load torch_global_deps for Windows

EDIT: Was abandoned. Will update with superseding PR soon


Abandoned

https://github.com/pytorch/pytorch/pull/35310

^ Switching default CUDA to 10.2 for wheels


Merged

https://github.com/pytorch/pytorch/pull/35315

Pin XLA CI to use r1.5 release branch.


Merged

35321, required for porting #34794, the latter of which will let users work around torch.div using either torch.floor_divide or torch.true_divide as appropriate.


Merged

35368

^ Load all DLLs in the lib directory for Windows


Merged

Ready patches for PTD:

~#35262~ #35514 enforce rref JIT pickling to be in the scope of rpc calls (port for #34689)
~#35264~ #35513 Enforce rref python pickling to be in the scope of RPC call (port for #34755)

WIP doc patches for PTD:

35109 Refactored RPC docs (will need a new PR against release/1.5 after it lands on master)


Merged non-WIP patches.

cc @zhaojuanmao @rohan-varma

35390, which ports #34794, which adds a torch.true_divide method variant. This lets users work around torch.div using either torch.floor_divide or torch.true_divide in all scenarios.
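For reference, a minimal sketch of the workaround this port enables (assuming a 1.5 build where both ops are available; the tensors are purely illustrative):

```python
import torch

a = torch.tensor([5, 7, 9])
b = torch.tensor([2, 2, 2])

# torch.div's behavior on integer tensors is being deprecated, so be explicit
# about which kind of division you actually want:
print(torch.floor_divide(a, b))  # tensor([2, 3, 4])               -- integer-style result
print(torch.true_divide(a, b))   # tensor([2.5000, 3.5000, 4.5000]) -- always floating point
```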


Merged

35402 Fix RPC test_torchscript_functions_not_supported failure on the release branch. (port for #35283)

cc @xush6528


Merged

[doc] Updating view op list https://github.com/pytorch/pytorch/pull/35403


Merged

https://github.com/pytorch/pytorch/pull/35412

^ To add future as a dependency for all of our packages


Merged

https://github.com/pytorch/pytorch/pull/35413 bump up ONNX exporter's version (original: https://github.com/pytorch/pytorch/pull/35059)

https://github.com/pytorch/pytorch/pull/35416 fix torch.mm in ONNX exporter (original: https://github.com/pytorch/pytorch/pull/34661)


Merged

https://github.com/pytorch/pytorch/pull/35434 contains doc strings and documentation updates for channels last


Merged

35450, adding a warning for a known autograd issue in the XLA backend.


Merged

35573, port of #35288: fix Caffe2 mobile compilation


Merged

35340 may be a candidate. Autocasting completes PyTorch's native automatic mixed precision support, which I've been writing for over 6 months now, targeting 1.5. It has thorough documentation and test coverage. It was approved and merged already, but reverted for minor fixes.

Update: PR against master #35102 has been re-merged and stuck the landing so far. #35340 cherry-picks those diffs onto 1.5 (+3 lines of cosmetic docstring changes).

Feature, sorry :(. It's in nightlies though.

Hi @mcarilli, so is it going to be merged into 1.5, or can we only get it from the 1.6 nightly build? (There is no 1.5 nightly build now.)

35668, port of #35659: [Windows] make torch_cuda's forced link also work for CMake


Merged

@wenhui-prudencemed Looks like autocast won't be merged into 1.5, because it's a new feature (it fails to satisfy criterion 1 in Greg's original post).

You can use it via nightlies or master now. See master documentation:
https://pytorch.org/docs/master/amp.html
https://pytorch.org/docs/master/notes/amp_examples.html
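For anyone trying it from nightlies, a minimal sketch based on those master docs (model, optimizer, and loader are placeholders for your own objects on a CUDA device):

```python
import torch

scaler = torch.cuda.amp.GradScaler()

for inputs, targets in loader:               # placeholder DataLoader
    optimizer.zero_grad()
    # Run the forward pass under autocast so eligible ops execute in float16.
    with torch.cuda.amp.autocast():
        outputs = model(inputs)              # placeholder model
        loss = torch.nn.functional.cross_entropy(outputs, targets)
    # Scale the loss before backward, then step/update through the scaler.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```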

https://github.com/pytorch/pytorch/pull/35579 Disabling complex construction because we want to release all complex-related features in 1.6


Merged

https://github.com/pytorch/pytorch/pull/35745 adds a warning that 1.5's implementation of automatic mixed precision is incomplete and points people to the master branch/nightly builds.


Merged

https://github.com/pytorch/pytorch/pull/35772 ports disabling torch.imag.


Merged

Ready for cherry-pick:

  • #35777 Improve C++ API autograd and indexing docs (cherry-pick PR: https://github.com/pytorch/pytorch/pull/35919)
  • #35190 Refactor C++ API parity test mechanism and turn it on in CI again (cherry-pick PR: https://github.com/pytorch/pytorch/pull/35960)
  • #35974 Use std::abs instead of abs in lbfgs.cpp (cherry-pick PR: https://github.com/pytorch/pytorch/pull/36033)

Merged

https://github.com/pytorch/pytorch/pull/35808, port of #35109. Only touches RPC documentation


Merged

35890, port of #35862. Removes potentially confusing integer div warnings.


Merged

https://github.com/pytorch/pytorch/pull/36111

Fixes Python type annotations for torchbind types in certain situations


Merged

https://github.com/pytorch/pytorch/pull/36116 (Errored)

https://github.com/pytorch/pytorch/pull/36338 (Fixed errors)

Update docs for 1.5 to remove Python 2 references


Merged

36141, port of #36095.

Fixes issue #36046.


Merged

https://github.com/pytorch/pytorch/pull/36126, port of https://github.com/pytorch/pytorch/pull/36052


Current plan is not to cherry-pick it; we want more signal / work done on the warning mechanism.

https://github.com/pytorch/pytorch/pull/36165

Group libraries in docs table of contents and add link to new PyTorch Elastic page


Merged

https://github.com/pytorch/pytorch/pull/36245, port of https://github.com/pytorch/pytorch/pull/36161 C++ Adam optimizer - corrected messages for check of default options.


Merged

https://github.com/pytorch/pytorch/pull/36274, port of https://github.com/pytorch/pytorch/pull/35601 fix is_float_scale_factor warning (Python and C++)


Merged

Not sure if this is the right place, but I would consider issue #36378 to be a blocker for the release since, for example, it prevents building torchvision.

https://github.com/pytorch/pytorch/pull/36514

minor change to fix formatting errors in the warnings added by https://github.com/pytorch/pytorch/pull/35745.


Merged

Any possibility of sneaking #35352 into 1.5.0?


Nope, feature.

AT_CHECK is gone; what's the alternative?

36537, port of #36656

Add a warning for Single-Process Multi-GPU DDP


Merged

AT_CHECK is gone; what's the alternative?

@jinfagang TORCH_CHECK is the alternative.

Any possibility of sneaking #35352 into 1.5.0?

No, this is a feature.

36658 Migrate release CI jobs to CircleCI for Windows (v1.5 Release)


Merged

36732, port of #36675

Doc-only change to make sure existing URLs to RPC docs are still valid.


Merged

@gchanan

https://github.com/pytorch/pytorch/pull/36824

Given that we keep running into the issue of incredibly long compilation times, we decided to switch to the simple executor, which skips fusion-related passes and analyses. This might affect some users who rely on GPU fusion to achieve their performance profile. We will make a note in the release notes for such users explaining how to enable the profiling executor.
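For users who do rely on GPU fusion, a hedged sketch of the opt-in (these are internal, underscore-prefixed JIT flags rather than a stable public API, so treat them as an assumption that may change):

```python
import torch

# Internal TorchScript switches: turn the profiling executor (and profiling
# mode) back on, since the 1.5 default falls back to the simple executor.
torch._C._jit_set_profiling_executor(True)
torch._C._jit_set_profiling_mode(True)
```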


Merged

disable flaky test #36924


Merged

https://github.com/pytorch/pytorch/pull/36927 fixes xla job failure on release/1.5 branch. (CI only change)


Merged

@gchanan https://github.com/pytorch/pytorch/pull/36933 should fix the test failures


Merged, but didn't make the RC. So if we do another release or RC it will be in, but otherwise we'll have some known test failures.

36947, port for #36948

Fixes a build error when USE_DISTRIBUTED=0. This does not need to go into our release binaries, as they are all built with USE_DISTRIBUTED=1, but it needs to land on the release/1.5 source branch in case users want to build with USE_DISTRIBUTED=0.


Didn't make it; this seems minor.

@mrshenli Is this issue getting tracked for release? https://github.com/pytorch/pytorch/issues/36268
PR https://github.com/pytorch/pytorch/pull/36523

No, we weren't confident in the fix so close to the release.
