Pytorch-lightning: Names of parameters may benefit from not being abbreviated

Created on 15 Aug 2019 · 8 comments · Source: PyTorchLightning/pytorch-lightning

I know this is nitpicky, but I think good naming is worth a lot of thought.

A lot of the API seems unhelpfully abbreviated to me, especially since Lightning is designed so that you don't have to deal with manual details like dataloaders more than necessary.

Names like tng_dataloader don't seem to buy anything over train_dataloader or training_dataloader since they're written only once and read many more times. Really, tng could be replaced with training or train elsewhere too.

data_batch seems redundant; I think it could just be called batch, since in context it can only represent data anyway, and batch_nb is already a separate argument.
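
For concreteness, here's a minimal sketch of how those names read today in a LightningModule (illustrative only: the class name is made up, hook bodies and decorators are elided, and only the hook and argument names are taken from this issue):

```python
import pytorch_lightning as pl


class CoolModel(pl.LightningModule):
    # "data_batch" says nothing that "batch" wouldn't, and the batch index
    # is already carried separately by batch_nb.
    def training_step(self, data_batch, batch_nb):
        ...

    # "tng" saves two characters over "train" but has to be mentally
    # expanded every time it's read.
    def tng_dataloader(self):
        ...
```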

Describe the solution you'd like

Rename them. The library is still in its early days.

Labels: enhancement, help wanted

All 8 comments

@alok sure, can you first list out the name changes here?

from -> to:

example:

tng_dataloader -> train_dataloader
  • tng -> training
  • pct -> percent or frac
  • nb -> num (I don't feel as strongly about this one)
  • data_batch -> batch
  • prog -> progress
  • outputs in validation_end -> progress_metrics
  • dataloader -> loader (that it's a DataLoader is clear from context, so the data part is redundant; no strong feeling here either)
  • current_epoch -> epoch (the only possible mix-up is with the total number of epochs, which could be called something like epochs)
  • gradient_clip -> gradient_clip_val (gradient_clip sounds like a boolean indicating whether to clip or not)
  • gpus in trainer -> gpu_ids
  • add_log_row_interval -> row_log_interval

awesome suggestions,

Let's do these:

keep validation_step 
keep training_step
in trainer options use: train, test, val  
for data: val_dataloader, test_dataloader, train_dataloader
keep pct   
keep nb   
data_batch -> batch    
prog -> progress   
keep progress_metrics    
keep dataloader    
keep current_epoch    
gradient_clip -> gradient_clip_val    
keep gpus
add_log_row_interval -> row_log_interval
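
To make the agreed names concrete, here is a rough sketch of how a module and trainer would look after these renames (only the names listed above come from this thread; the class name, argument values, and elided bodies are illustrative):

```python
import pytorch_lightning as pl
from pytorch_lightning import Trainer


class MyModel(pl.LightningModule):
    # step hooks keep their names; the batch argument drops the "data_" prefix
    def training_step(self, batch, batch_nb):
        ...

    def validation_step(self, batch, batch_nb):
        ...

    # dataloader hooks use train/val/test instead of "tng"
    def train_dataloader(self):
        ...

    def val_dataloader(self):
        ...

    def test_dataloader(self):
        ...


trainer = Trainer(
    gpus=1,                 # kept as-is
    gradient_clip_val=0.5,  # was gradient_clip
    row_log_interval=10,    # was add_log_row_interval
)
```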

I propose to rename update_tng_log_metrics to update_train_log_metrics

I'll hold off on this until #146 is resolved, since it affects this.

Let's do these:

gradient_clip -> gradient_clip_val    

I suggest gradient_clip_norm instead, because PyTorch already has torch.nn.utils.clip_grad_value_, which clips individual gradient values with torch.clamp, so gradient_clip_val would be confusing.
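
For reference, a short plain-PyTorch snippet contrasting the two built-in clipping utilities this comment is distinguishing between (the model and threshold values here are arbitrary):

```python
import torch
from torch import nn
from torch.nn.utils import clip_grad_norm_, clip_grad_value_

model = nn.Linear(4, 2)
loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()

# Element-wise clipping: every gradient entry is clamped to [-0.5, 0.5].
# This is what a name like gradient_clip_val suggests.
clip_grad_value_(model.parameters(), clip_value=0.5)

# Norm-based clipping: gradients are rescaled so their total 2-norm is at most 1.0.
# The comment's point is that Lightning's option clips by norm, so
# gradient_clip_norm would be the less confusing name.
clip_grad_norm_(model.parameters(), max_norm=1.0)
```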

Merged #124
