```python
import torch
from torch.autograd import Variable

a = Variable(torch.Tensor([1, 2]))
if torch.cuda.is_available():
    b = a.cuda()
    b_data = b.data            # reference to the underlying Tensor
    b_grad = b.grad            # may be None until backward() has been called
    b_grad_data = b.grad.data  # fails with AttributeError while b.grad is None
```
If `.data` and `.grad` are attributes, then `.cuda()` could be kept as a property so that one can do `b = a.cuda`. I am making this suggestion because I think it would increase API consistency: `.cuda()`, `.data`, and `.grad` all give back `torch.Tensor`-like objects, so `cuda()` could be changed to a property. By doing this, the GPU-resident model could be accessed as `model.cuda`.
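To make the proposal concrete, here is a minimal sketch of what attribute-style access would look like. `FakeVariable` is a hypothetical stand-in class (not the real torch API), and the list copy merely stands in for a CPU-to-GPU transfer:

```python
class FakeVariable:
    """Hypothetical Variable-like class illustrating the proposed API."""

    def __init__(self, data):
        self._data = data

    @property
    def data(self):
        # cheap: just returns a reference
        return self._data

    @property
    def cuda(self):
        # the proposal: attribute access that performs the copy
        # (a plain list copy stands in for the CPU->GPU transfer)
        return FakeVariable(list(self._data))


a = FakeVariable([1, 2])
b = a.cuda        # no parentheses, per the proposal
print(b.data)     # [1, 2]
```

Under this sketch, `a.cuda` and `a.data` read identically at the call site, which is the consistency being argued for.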
`cuda` is made a method since it is a relatively expensive operation that involves copying data from CPU to GPU, while `grad` and `data` are cheap (O(1)) operations that only retrieve references to the corresponding Tensor. I think making `cuda` a method is reasonable.
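The cost argument can be made concrete with a small sketch (the class and names here are hypothetical, not torch code): if the expensive operation hid behind a property, every attribute access would silently perform a full copy, whereas a reference-returning property really is O(1).

```python
class Counting:
    """Hypothetical class contrasting an expensive property with a cheap one."""
    copies = 0

    def __init__(self, payload):
        self._payload = payload

    @property
    def expensive(self):
        # every attribute access silently performs a full copy
        Counting.copies += 1
        return list(self._payload)

    @property
    def cheap(self):
        # O(1): just hands back the same reference
        return self._payload


c = Counting([0] * 1000)
x = c.expensive
y = c.expensive
print(Counting.copies)     # 2: two accesses, two hidden copies
print(c.cheap is c.cheap)  # True: same reference every time
```

The method-call syntax `cuda()` makes the hidden cost visible at the call site, which is the convention this comment is defending.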
`.cuda()` copies CPU data to the GPU. You probably don't want to keep the data on the GPU all the time; that is, you should only move data to the GPU when it's really necessary.
As mentioned in both comments above, making `.cuda` a property would be inappropriate. `.cuda()` returns a new object each time it is called, and a lot happens under the hood.
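The "returns a new object" objection can also be sketched in isolation (hypothetical class, not torch code): a property that allocates on every access breaks the usual expectation that repeated attribute reads yield the same object.

```python
class Allocating:
    """Hypothetical class whose property allocates a fresh object per access."""

    @property
    def view(self):
        return object()  # new object on every attribute access


a = Allocating()
print(a.view is a.view)  # False: surprising behavior for an "attribute"
```

Since `.cuda()` behaves this way (each call produces a new GPU copy), method syntax signals that behavior honestly.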