SigOpt Winter Update

Happy New Year!

2017 was a great year for SigOpt. Leading firms in algorithmic trading, banking, and technology selected us to improve their optimization processes. We announced strategic partnerships with companies like AWS, Intel AI, NVIDIA, and SAP, and we won industry awards such as the Barclays Innovation Challenge Award and the CB Insights AI 100, in addition to being recognized as a Gartner Cool Vendor in AI. We are looking forward to an even better 2018!

In this update, we will discuss newly released features, showcase content we have produced, published, or been cited in, and share some of the interesting machine learning research that our research team came across while presenting at NIPS.

We hope you find this valuable and informative!

New SigOpt Features

PrivateLink allows users operating within an Amazon VPC to create a private endpoint to our service inside their VPC. This ensures that customer communication with the SigOpt platform, while already secure thanks to our "black box" design, is further protected: the connection to SigOpt stays entirely within the AWS infrastructure and is strictly "one way." Customers receive a private endpoint within their VPC (just as they do for any AWS service) and never need to communicate with SigOpt across the open internet.

Conditional Parameters* allow customers to create dynamic parameter spaces, where the setting of one parameter determines whether other parameters exist. As a simple example, consider a neural network where the objective is to tune both the number of hidden layers and the number of nodes in each hidden layer. At each iteration, SigOpt will suggest a value M for the number of hidden layers and return node-count values only for layers 1 through M (see Figure 1).

Figure 1: A conditional case comparing scenarios where the "number of hidden layers" parameter is 3 (left) vs. 1 (right), and the impact that has on the neurons_layerX parameters.
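The conditional logic above can be sketched in plain Python. This is a hypothetical illustration, not the SigOpt client API: the parameter names (`num_hidden_layers`, `neurons_layerX`) follow the figure, and `active_parameters` is an invented helper showing which parameters survive for a given suggestion.

```python
# Hypothetical sketch of a conditional parameter space: the value suggested
# for "num_hidden_layers" controls which "neurons_layerX" parameters exist.

def active_parameters(suggestion):
    """Return only the parameters that exist for this suggestion.

    `suggestion` maps every possible parameter name to a value; the
    conditional rule keeps neurons_layer1 .. neurons_layerM when
    num_hidden_layers == M, and drops the rest.
    """
    m = suggestion["num_hidden_layers"]
    active = {"num_hidden_layers": m}
    for i in range(1, m + 1):
        active[f"neurons_layer{i}"] = suggestion[f"neurons_layer{i}"]
    return active

full = {"num_hidden_layers": 1,
        "neurons_layer1": 64, "neurons_layer2": 128, "neurons_layer3": 32}
print(active_parameters(full))
# {'num_hidden_layers': 1, 'neurons_layer1': 64}
```

With `num_hidden_layers` set to 1, only `neurons_layer1` remains active, matching the right-hand scenario in Figure 1.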

High Parallelism* gives users access to more than 10 simultaneously open suggestions (i.e., parameter configurations), which can then be trained, evaluated, and reported back to SigOpt independently of each other. Customers with access to highly scalable compute environments, whether in their own datacenter or in the cloud, can use those resources to accelerate the convergence of their models.

* This feature is in customer beta
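The high-parallelism workflow can be sketched generically with Python's standard library. This is an assumption-laden illustration, not SigOpt client code: `train_and_evaluate` is a stand-in for the user's training job, and the toy objective exists only so the example runs.

```python
# Hypothetical sketch: evaluate many open suggestions in parallel, each
# trained and scored independently of the others.
from concurrent.futures import ThreadPoolExecutor

def train_and_evaluate(params):
    # Placeholder objective; a real workload would train a model here.
    return -(params["x"] - 3) ** 2

# Stand-ins for open suggestions; with High Parallelism, more than 10
# could be open at once.
suggestions = [{"x": x} for x in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(train_and_evaluate, suggestions))

for params, value in zip(suggestions, results):
    # Each (params, value) observation would be reported back independently.
    print(params, value)
```

Because each suggestion is evaluated independently, the wall-clock time scales with the number of available workers rather than the number of suggestions.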

SigOpt in the Wild

One of the largest updates from the last few months is the strategic partnership we have forged with Amazon. SigOpt is now available in the AWS Marketplace, we were named to the inaugural AWS Machine Learning Competency Partner Program at re:Invent, and we were one of the five launch partners for the PrivateLink service announcement (as described in the new features section).

Our research team also went to NIPS in December, where our own Ruben Martinez-Cantin was one of the organizers of the BayesOpt Workshop, and we presented posters at three workshops.

We also want to thank the Intel team for hosting us in their booth and allowing us to demo our platform!

Conferences

At this time, we are not committed to any conferences in the Jan-March time frame. However, we expect to be in both Washington, D.C. and New York City at least once and would love to meet up, whether for a meeting, a presentation to your team, or a happy hour. Drop us a line if you are interested, and we will schedule as our plans are confirmed.

Interesting ML/AI Research

These are some of the papers from NIPS that caught our research team's eye as interesting developments in machine learning, along with their notes:

Dynamic Routing Between Capsules - This is probably the most important deep learning paper for computer vision (and related fields) since convolutional neural networks. It presents the idea of a layer within a layer and dynamic routing of information within the network, which enables affine invariance and feature agreement, two important properties missing from CNNs.

Predictive State RNNs & Predictive State Decoders - Two papers at the conference explored recurrent neural networks for predictive state representations. These networks can be used not only for predicting the future in dynamical systems, but also for integrating concepts from probabilistic filtering.

GANs, GANs, and more GANs (and a Bayesian GAN) - Generative Adversarial Networks are a very popular topic, and many of the papers at NIPS were about GANs. These are a few that we thought were the most interesting, specifically from a theoretical or novelty point of view. We especially enjoyed the Bayesian GAN paper as a natural extension of both fields, combining the expressiveness of Bayesian methods with the power of GANs.

One- and few-shot learning - Meta-learning is a field of growing interest, and within it, one-shot (and few-shot) learning is attracting more and more attention. We have selected our two favorites from the proceedings.

Happy optimizing,

The SigOpt Team