I know this is nitpicky, but I think good naming is worth a lot of thought.
A lot of the API seems unhelpfully abbreviated to me, especially since Lightning is designed so that you don't have to handle low-level details like dataloaders more than necessary.
Names like tng_dataloader don't seem to buy anything over train_dataloader or training_dataloader since they're written only once and read many more times. Really, tng could be replaced with training or train elsewhere too.
data_batch seems redundant; I think it could just be called batch, since in context it can only represent data anyway, and batch_nb is already a separate argument.
Describe the solution you'd like
Rename. The library is still in its early days.
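For illustration, here is a rough before/after of the kind of rename being proposed. Only the names tng_dataloader, data_batch, and batch_nb come from this issue; the surrounding class skeleton is a simplified assumption, not the library's exact interface.

```python
from torch.utils.data import DataLoader

# Current, abbreviated names (as described above) -- illustrative skeleton only.
class AbbreviatedModel:
    def training_step(self, data_batch, batch_nb):
        ...

    def tng_dataloader(self) -> DataLoader:
        ...

# Proposed, spelled-out names: "batch" is unambiguous next to "batch_nb",
# and "train_dataloader" reads naturally.
class RenamedModel:
    def training_step(self, batch, batch_nb):
        ...

    def train_dataloader(self) -> DataLoader:
        ...
```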
@alok sure, can you first list out the name changes here?
from -> to:
example:
tng_dataloader -> train_dataloader
tng -> training
pct -> percent or frac
nb -> num (this one I don't have as strong feelings about)
data_batch -> batch
prog -> progress
outputs in validation_end -> progress_metrics
dataloader -> loader (that it's a DataLoader is clear, so the data part is redundant, but again no really strong feeling)
current_epoch -> epoch (the only mixup that could be had is with the total number of epochs, and that could be called something like epochs)
gradient_clip -> gradient_clip_val (gradient_clip sounds like a boolean indicating whether to clip or not)
gpus in trainer -> gpu_ids
add_log_row_interval -> row_log_interval

awesome suggestions,
Let's do these:
keep validation_step
keep training_step
in trainer options use: train, test, val
for data: val_dataloader, test_dataloader, train_dataloader
keep pct
keep nb
data_batch -> batch
prog -> progress
keep progress_metrics
keep dataloader
keep current_epoch
gradient_clip -> gradient_clip_val
keep gpus
add_log_row_interval -> row_log_interval
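To see how the agreed names hang together, here is a minimal sketch. The hook and option names are taken from the list above; the Trainer call and the exact signatures are assumptions for illustration, not verbatim library API.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset


class CoolModel:  # hypothetical module skeleton, not the real LightningModule
    def training_step(self, batch, batch_nb):  # data_batch -> batch
        x, y = batch
        ...

    def validation_step(self, batch, batch_nb):  # name kept
        ...

    def train_dataloader(self):  # train/val/test prefixes for the data hooks
        dataset = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
        return DataLoader(dataset, batch_size=8)

    def val_dataloader(self):
        ...


# Trainer options with the agreed renames; `gpus` stays as-is:
# trainer = Trainer(gradient_clip_val=0.5, row_log_interval=10, gpus=1)
```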
I propose to rename update_tng_log_metrics to update_train_log_metrics
I'll hold off on this until #146 is resolved, since it affects these renames.
Let's do these:
gradient_clip -> gradient_clip_val

I suggest gradient_clip_norm instead, because PyTorch has torch.nn.utils.clip_grad_value_, which clips individual partial derivatives using torch.clamp, and the name gradient_clip_val could be confused with that.
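To make the distinction concrete: clip_grad_norm_ rescales the whole gradient vector by its norm, while clip_grad_value_ clamps each element. A small self-contained example (the model and numbers are arbitrary):

```python
import torch

# A throwaway model just to produce some gradients.
model = torch.nn.Linear(4, 2)
model(torch.randn(8, 4)).sum().backward()

# Norm-based clipping: rescales all gradients together so their combined
# norm is at most max_norm (what a gradient_clip_norm option would imply).
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

# Value-based clipping: clamps every gradient element into
# [-clip_value, clip_value], i.e. torch.clamp applied per element --
# this is what clip_grad_value_ does, hence the concern that a
# "_val" suffix could be read as this kind of clipping.
torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=1.0)
```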
Merged #124