Solving Common Issues in Distributed Hyperparameter Optimization
You’ve spent time gathering data and engineering features, and now you’re ready to pick a model and maximize its performance. The next step is to optimize the hyperparameters of your model. You’ve chosen Bayesian optimization for hyperparameter optimization because it has been shown to achieve better results than grid search and random search, while also requiring fewer evaluations of your model.
Since you probably want to complete hyperparameter optimization in a reasonable amount of time, you’re considering some method of distributing the total work across a cluster. If you’re considering distributing the model itself over the cluster, Netflix has a great blog post about the tradeoffs involved in that decision. For now, let’s assume that model training is contained on a single machine, and individual workers train new models for each set of hyperparameters, in a setup similar to the one below.
However, you still have a few problems:
- Progress monitoring
- Custom hyperparameters
We’ll show you how you can use SigOpt’s API for Bayesian optimization to manage your cluster and solve both of these problems.
Problem #1: Progress Monitoring
The performance benefits of Bayesian optimization come at the cost of additional bookkeeping. Unlike grid search or random search, Bayesian optimization requires a master node to store the model performance associated with each set of hyperparameters it has tried. Because of this added complexity, the development of a distributed Bayesian hyperparameter optimization cluster requires simultaneously debugging both your job scheduler and the health of your cluster. Spark and AWS offer cluster monitoring tools for the health of your machines, but you need an at-a-glance answer to the question “Is my hyperparameter optimization running smoothly?”
SigOpt’s answer is visualization. In your hyperparameter optimization cluster, each node will repeatedly execute the following steps:
- Receive a suggested configuration of hyperparameters
- Evaluate the model using the suggested hyperparameters
- Report back an observation on the model’s performance
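The loop above can be sketched as follows. This is a minimal local stand-in, not SigOpt’s client code: in a real setup, `suggest` and `report` would be replaced by the two API calls, and `evaluate_model` by your actual training job. All names and the toy objective here are illustrative.

```python
import random

def suggest():
    # Stand-in for step 1: in a real cluster this would be an API call
    # returning the next hyperparameter configuration to evaluate.
    return {"learning_rate": random.uniform(1e-4, 1e-1),
            "max_depth": random.randint(2, 10)}

def evaluate_model(params):
    # Stand-in for step 2: train the model with `params` and return a
    # performance metric (e.g. validation accuracy). Toy objective here.
    return 1.0 - abs(params["learning_rate"] - 0.01)

def report(params, value):
    # Stand-in for step 3: in a real cluster this would be an API call
    # reporting the observed performance back to the master.
    return {"params": params, "value": value}

def worker_loop(num_evaluations):
    observations = []
    for _ in range(num_evaluations):
        params = suggest()                          # step 1: get a suggestion
        value = evaluate_model(params)              # step 2: train and score
        observations.append(report(params, value))  # step 3: report back
    return observations
```

Because each worker runs this same loop independently, adding machines to the cluster requires no coordination logic on your end.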
When using SigOpt, steps 1 and 3 are both simple API calls. We built a dashboard of useful graphs on top of information that we infer from API calls, such as the time an API call is made.
SigOpt’s web dashboards visualize the progress of your model’s performance, so you can quickly diagnose potential system failings. If a classifier’s accuracy was 80% with default hyperparameters but stayed flat at 50% after 20 model evaluations, you might have a problem.
In addition, SigOpt’s web dashboards visualize the number of performance observations from step 3 that have been reported. Our API allows you to report failed performance for any reason, and we include this data in our visualization. You can also use metadata to slice your data by hostname. It’s easy to glance at a chart, see red in the column for host-2, and know that something is wrong.
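As a sketch of what such a report might carry, the snippet below assembles an observation payload with a failure flag and per-host metadata. The field names are illustrative rather than SigOpt’s exact schema, though the API does accept failed observations and arbitrary metadata.

```python
import socket

def build_observation(value=None, failed=False):
    # Assemble an observation report. When a model evaluation crashes or
    # diverges, send a failure flag instead of a metric value; tagging each
    # report with the hostname lets a dashboard slice results per machine.
    obs = {"metadata": {"hostname": socket.gethostname()}}
    if failed:
        obs["failed"] = True
    else:
        obs["value"] = value
    return obs
```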
Problem #2: Custom Hyperparameters
Now, let’s say you’ve got one particular set of hyperparameters that you’re dying to try out because your intuition tells you it’s going to be great. The problem is, you’ve already started the distributed hyperparameter optimization and you don’t want to shut anything down. At SigOpt, we have designed our API to act as a distributed job scheduler. We manage storing, queueing, and serving hyperparameter configurations, allowing you to run the same optimization code on every node, whether in the sequential or parallel workflow. This centralized design allowed us to easily build a feature for queuing up custom hyperparameter configurations.
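The scheduling behavior can be sketched with a toy in-memory queue: hand-queued configurations are served before freshly generated suggestions, so workers keep calling the same “give me the next suggestion” endpoint with no code changes. This is an illustration of the design, not SigOpt’s implementation.

```python
import collections
import random

class SuggestionQueue:
    # Toy central scheduler: manually queued configurations take
    # priority over freshly generated suggestions.
    def __init__(self):
        self.manual = collections.deque()

    def enqueue_custom(self, params):
        # Queue a specific configuration to try next
        # (e.g. submitted from a dashboard).
        self.manual.append(params)

    def next_suggestion(self):
        if self.manual:  # serve hand-queued configs first
            return self.manual.popleft()
        # otherwise fall back to generating a new suggestion
        return {"learning_rate": random.uniform(1e-4, 1e-1)}
```

Because workers only ever call `next_suggestion`, a custom configuration slips into the stream of jobs transparently.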
From the website, navigate to an experiment’s monitoring dashboard to queue a suggestion. It will automatically be included in the distributed hyperparameter jobs that are sent out via the API, with no code changes necessary on your end. It even works on mobile.
SigOpt’s API for hyperparameter optimization leaves us well-positioned to build exciting features for anyone who wants to perform Bayesian hyperparameter optimization in parallel. We offer an easy-to-use REST API with R, Java and Python clients, and all data that is passed through the API is available on both desktop and mobile websites. We’re constantly building new features and visualizations for monitoring your progress and exploring your data.