Hyperparameter optimization for a PyTorch model

Backend · Open · 4 answers · 1581 views
有刺的猬 2021-01-30 13:48

What is the best way to perform hyperparameter optimization for a PyTorch model? Should I implement, e.g., random search myself? Use Scikit-Learn? Or is there anything else I am not aware of?
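For reference, "implement random search myself" can be quite short. The sketch below is a minimal, framework-free version: it assumes a `train_and_eval(lr, batch_size)` function standing in for your actual PyTorch training loop, and samples the learning rate log-uniformly.

```python
import random

def train_and_eval(lr, batch_size):
    # Placeholder objective -- replace with your real PyTorch
    # training loop that returns a validation metric.
    return 1.0 - abs(lr - 0.01) - abs(batch_size - 64) / 1000

# Each entry is a zero-argument sampler for one hyperparameter.
search_space = {
    "lr": lambda: 10 ** random.uniform(-4, -1),       # log-uniform in [1e-4, 1e-1]
    "batch_size": lambda: random.choice([32, 64, 128]),
}

def random_search(n_trials=20, seed=0):
    random.seed(seed)
    best_score, best_config = float("-inf"), None
    for _ in range(n_trials):
        config = {name: sample() for name, sample in search_space.items()}
        score = train_and_eval(**config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score
```

This is fine for a handful of trials on one machine; the answers below cover libraries that add scheduling, early stopping, and distributed execution on top of the same idea.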

4 Answers
  •  闹比i (OP)
     2021-01-30 14:33

    Many researchers use Ray Tune. It's a scalable hyperparameter tuning framework built for deep learning: it integrates with any deep learning framework in a couple of lines of code (example below), and it provides most state-of-the-art algorithms, including HyperBand, Population-Based Training, Bayesian Optimization, and BOHB.

    import torch.optim as optim
    from ray import tune
    from ray.tune.examples.mnist_pytorch import get_data_loaders, ConvNet, train, test
    
    
    def train_mnist(config):
        train_loader, test_loader = get_data_loaders()
        model = ConvNet()
        optimizer = optim.SGD(model.parameters(), lr=config["lr"])
        for i in range(10):
            train(model, optimizer, train_loader)
            acc = test(model, test_loader)
            tune.report(mean_accuracy=acc)
    
    
    analysis = tune.run(
        train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})
    
    print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))
    
    # Get a dataframe for analyzing trial results.
    df = analysis.dataframe()
    

    [Disclaimer: I contribute actively to this project!]
