SigOpt Fall Update

This is the first edition of our new quarterly newsletter. In these updates, we will discuss newly released features, showcase content we have produced or been cited in, and share interesting machine learning research that our research team has found. We hope you find these updates valuable and informative!

New SigOpt Features

Multimetric Optimization allows customers to optimize across multiple objective functions. The SigOpt dashboard traces out the Pareto frontier, enabling domain experts to choose how to trade off between the competing metrics. The multimetric feature is great for optimizing a deep learning model where one must trade off between model accuracy and inference time.

When running a Multimetric experiment, the SigOpt dashboard shows the Pareto frontier across the competing metrics.
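As a rough sketch, here is how a multimetric experiment might be set up with the SigOpt Python client. Field names follow the current SigOpt documentation and may differ by API version, and train_and_time is a stand-in for your own evaluation code:

from sigopt import Connection

def train_and_time(assignments):
    # Placeholder for your own training and benchmarking code.
    return 0.90, 12.0  # (accuracy, inference time in ms)

conn = Connection(client_token="YOUR_API_TOKEN")

# Two competing metrics: SigOpt traces the Pareto frontier between them.
experiment = conn.experiments().create(
    name="CNN: accuracy vs. inference time",
    parameters=[
        dict(name="log_learning_rate", type="double", bounds=dict(min=-5, max=-1)),
        dict(name="batch_size", type="int", bounds=dict(min=16, max=256)),
    ],
    metrics=[
        dict(name="accuracy", objective="maximize"),
        dict(name="inference_time", objective="minimize"),
    ],
    observation_budget=100,
)

# The usual suggest/observe loop, reporting both metric values.
suggestion = conn.experiments(experiment.id).suggestions().create()
accuracy, inference_time = train_and_time(suggestion.assignments)
conn.experiments(experiment.id).observations().create(
    suggestion=suggestion.id,
    values=[
        dict(name="accuracy", value=accuracy),
        dict(name="inference_time", value=inference_time),
    ],
)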

Linear Constraints* enable users to enforce a linear relationship between two or more continuous parameters. For example, you may want the sum of several parameters to always stay below a given threshold. An example use case is optimizing advertising spend, where the budget is finite and the hyperparameters represent the budget allocation across different channels.
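As a sketch of the advertising example, the constraint below keeps three channel allocations from exceeding the total budget. This assumes the linear_constraints field of the SigOpt Python client's experiment-create call; the parameter and metric names are illustrative:

from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

# Enforce search_spend + social_spend + display_spend <= 1.0,
# i.e., the channel allocations never exceed 100% of the budget.
experiment = conn.experiments().create(
    name="Ad spend allocation",
    parameters=[
        dict(name="search_spend", type="double", bounds=dict(min=0, max=1)),
        dict(name="social_spend", type="double", bounds=dict(min=0, max=1)),
        dict(name="display_spend", type="double", bounds=dict(min=0, max=1)),
    ],
    linear_constraints=[
        dict(
            type="less_than",
            threshold=1.0,
            terms=[
                dict(name="search_spend", weight=1),
                dict(name="social_spend", weight=1),
                dict(name="display_spend", weight=1),
            ],
        ),
    ],
    metrics=[dict(name="roi", objective="maximize")],
    observation_budget=60,
)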

Multisolution Optimization allows users to locate multiple solution configurations that are similar in performance but a meaningful distance apart in the parameter space. This is helpful when you want a portfolio of different “good” configurations to investigate further or to combine in an ensemble. Algorithmic trading firms, for example, can use SigOpt's Multisolution feature to locate multiple distinct trading strategies.
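Here is a sketch of what requesting a portfolio of configurations might look like, assuming the num_solutions field on experiment creation as described in the SigOpt docs (parameter and metric names are illustrative):

from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

# Ask for 5 well-separated, well-performing configurations
# instead of a single best point.
experiment = conn.experiments().create(
    name="Trading strategy portfolio",
    parameters=[
        dict(name="lookback_days", type="int", bounds=dict(min=5, max=120)),
        dict(name="entry_threshold", type="double", bounds=dict(min=0.1, max=3.0)),
    ],
    metrics=[dict(name="sharpe_ratio", objective="maximize")],
    num_solutions=5,
    observation_budget=80,
)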

* This feature is in customer beta

SigOpt in the Wild

Here are a few technical blog posts and research papers that we've recently published:

In Fast CNN Tuning with AWS GPU Instances and SigOpt, we showed that SigOpt can tune a neural network 10x faster than conventional methods.

In Deep Learning Hyperparameter Optimization with Competing Objectives, we tuned deep neural networks to jointly optimize two metrics: model accuracy and inference time.

In our post on the NVIDIA blog, we showed that, given the same computational budget, SigOpt produces a more Pareto-efficient confidence region than random search.

Our research team attended ICML in August, where they presented their work on Active Preference Learning for Personalized Portfolio Construction.

Our CEO Scott Clark sat down with the This Week in Machine Learning & A.I. podcast to discuss topics like Exploration vs. Exploitation, Bayesian Regression, Heterogeneous Configuration Models, and Covariance Kernels.

For our Academic customers: have you recently presented or published work where you used SigOpt? Let us know, and we'd be happy to feature it in future newsletters!

Interesting ML/AI Research

Here are papers that have caught our research team's eye as interesting developments in machine learning, along with their notes:

An Alternative Softmax Operator for Reinforcement Learning - This paper proposes a new softmax operator, named Mellowmax, as an alternative to the Boltzmann softmax operator for reinforcement learning problems. The newly proposed operator exhibits several desirable properties (e.g. differentiability, non-expansion). In particular, the Mellowmax operator circumvents unstable value function approximation in the Boltzmann policy.
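For intuition, the Mellowmax operator is mm_omega(x) = log((1/n) * sum_i exp(omega * x_i)) / omega. Here is a small self-contained sketch of it (the helper name and omega values are our own):

import numpy as np

def mellowmax(x, omega=5.0):
    # mm_omega(x) = log( (1/n) * sum_i exp(omega * x_i) ) / omega,
    # computed with a log-sum-exp shift for numerical stability.
    x = np.asarray(x, dtype=float)
    c = omega * x.max()
    return (c + np.log(np.mean(np.exp(omega * x - c)))) / omega

# Mellowmax interpolates between the mean (omega -> 0) and the
# max (omega -> infinity), and unlike the Boltzmann softmax it is
# a non-expansion, which keeps value iteration stable.
print(mellowmax([1.0, 2.0, 3.0], omega=10.0))  # ~2.89, close to max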

Wasserstein Generative Adversarial Networks - This paper makes great headway on the mode collapse problem in GANs by using the Earth Mover (EM) distance between two distributions.
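For intuition about the distance itself (not about WGAN training): in one dimension, the EM (W1) distance between two equal-size empirical samples reduces to the mean absolute gap between their sorted values. A minimal sketch:

import numpy as np

def wasserstein_1d(x, y):
    # For equal-size 1-D samples, the optimal transport plan matches
    # points in sorted order, so W1 is the mean gap between order statistics.
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.size == y.size
    return np.mean(np.abs(x - y))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=1000)
b = rng.normal(0.5, 1.0, size=1000)
print(wasserstein_1d(a, b))  # ~0.5, the shift between the two Gaussians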

A Closer Look at Memorization in Deep Networks - An analysis of how and when deep neural networks generalize versus memorize, examining a network's ability to distinguish random features from hierarchical representations.

Sharp Minima Can Generalize for Deep Nets - An interesting paper arguing that sharp minima can generalize well to unseen data and, conversely, that flat minima can fail to generalize to unseen data. A reparameterization of the function can produce arbitrarily sharp minima for an equivalent model.
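The reparameterization argument is easy to reproduce numerically. In the toy model below (our own illustration, not code from the paper), f(x) = w2 * relu(w1 * x) computes the identical function under (alpha * w1, w2 / alpha) for any alpha > 0, yet the curvature of the loss at the minimum can be made arbitrarily large:

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def loss(w1, w2, x=1.0, target=2.0):
    # Squared error of the one-hidden-unit ReLU model f(x) = w2 * relu(w1 * x).
    return (w2 * relu(w1 * x) - target) ** 2

def curvature_along_w2(w1, w2, eps=1e-4):
    # Finite-difference second derivative of the loss in the w2 direction.
    return (loss(w1, w2 + eps) - 2.0 * loss(w1, w2) + loss(w1, w2 - eps)) / eps**2

# Both points are zero-loss minima of the same underlying function...
alpha = 100.0
print(curvature_along_w2(1.0, 2.0))            # ~2: a relatively flat minimum
print(curvature_along_w2(alpha, 2.0 / alpha))  # ~20000: same model, far sharper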

If you're interested in using any of the new features or learning more about any of the content we've published, contact your account team. Thanks for reading our newsletter, and stay tuned for new developments!