Ray: How is the best configuration selected from all trials?

Created on 31 Jul 2020 · 4 comments · Source: ray-project/ray

What is your question?

From the code below, I understand that train_mnist is called once per trial, while train is called once per training iteration. You can see that tune.report submits mean_accuracy every training iteration. But I don't know whether each trial selects its best training iteration.

For example, suppose every trial runs 10 iterations and tune.report reports 10 different mean_accuracy values, of which only the 5th is the best. Since analysis.get_best_config can only get the best mean_accuracy among all trials, it is not clear whether that best mean_accuracy comes from the best training iteration of the best trial, or just from the last training iteration (the tenth, in this case) of that trial.

import torch
import torch.optim as optim
from ray import tune
# get_data_loaders, ConvNet, train, and test come from Ray's MNIST
# example (ray/tune/examples/mnist_pytorch.py).

def train_mnist(config):
    use_cuda = config.get("use_gpu") and torch.cuda.is_available()
    device = torch.device("cuda" if use_cuda else "cpu")
    train_loader, test_loader = get_data_loaders()
    model = ConvNet().to(device)
    optimizer = optim.SGD(
        model.parameters(), lr=config["lr"], momentum=config["momentum"])

    while True:
        train(model, optimizer, train_loader, device)
        acc = test(model, test_loader, device)
        tune.report(mean_accuracy=acc)

Ray version and other system information (Python version, TensorFlow version, OS):
Python 3.7.4
Ray: 0.8.6
OS: linux

question

All 4 comments

Great question - you can use analysis.get_best_config(scope="max") to get the best over all iterations.

Thanks very much! However, scope only supports one of [all, last, avg, last-5-avg, last-10-avg]. Please see the documentation for get_best_config here: https://docs.ray.io/en/master/tune/api_docs/analysis.html

It says

If scope=all, find each trial's min/max score for metric based on mode, and compare trials based on mode=[min,max].

Does this mean that, with mode='max' and metric='mean_accuracy', each trial is scored by its maximum mean_accuracy across all of its training iterations, and the best configuration is then chosen from those per-trial maxima?

Yeah, "all" is what you are looking for.

Thanks very much. I got it.
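For illustration, the scope semantics discussed above can be sketched in plain Python, with no Ray dependency. The trial histories and the best_trial helper below are hypothetical, not Ray's actual implementation; they just mimic how scope="all" scores each trial by its best iteration while scope="last" uses only the final reported value:

```python
# Hypothetical per-trial metric histories: each list is the sequence of
# mean_accuracy values a trial would report, one per training iteration.
trial_histories = {
    "trial_a": [0.60, 0.95, 0.70, 0.65],  # peaks at iteration 2, ends low
    "trial_b": [0.65, 0.80, 0.85, 0.90],  # steadily improves to the end
}

def best_trial(histories, mode="max", scope="all"):
    """Sketch of how scope affects trial comparison (not Ray's code).

    scope="all":  score each trial by its best (min/max) iteration.
    scope="last": score each trial by its final reported value.
    """
    pick = max if mode == "max" else min
    if scope == "all":
        scores = {name: pick(h) for name, h in histories.items()}
    else:  # scope == "last"
        scores = {name: h[-1] for name, h in histories.items()}
    return pick(scores, key=scores.get)

# With scope="all", trial_a wins on its peak accuracy (0.95), even though
# its final iteration (0.65) is worse than trial_b's final value (0.90).
print(best_trial(trial_histories, scope="all"))   # trial_a
print(best_trial(trial_histories, scope="last"))  # trial_b
```

So with scope="all" and mode="max", the winning configuration is the one whose trial reached the highest mean_accuracy at any iteration, not necessarily at its last one.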
