A geometric interpretation of the upper confidence bound, presented at ICASSP 2018.
We are excited to announce the general availability of Constraints, a feature that gives customers more fine-grained control of an experiment's parameter space.
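For readers who want a feel for how Constraints are expressed, here is a minimal sketch using the SigOpt Python client; the API token, parameter names, bounds, and the alpha + beta constraint are placeholders, and the `linear_constraints` field follows the public docs rather than this announcement.

```python
# Minimal sketch of a constrained experiment with the SigOpt Python client.
# The token, parameter names, and the alpha + beta <= 1 constraint are all
# illustrative placeholders.
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

experiment = conn.experiments().create(
    name="Constrained tuning sketch",
    parameters=[
        dict(name="alpha", type="double", bounds=dict(min=0.0, max=1.0)),
        dict(name="beta", type="double", bounds=dict(min=0.0, max=1.0)),
    ],
    # Restrict suggestions to the region where alpha + beta <= 1.
    linear_constraints=[
        dict(
            type="less_than",
            threshold=1.0,
            terms=[dict(name="alpha", weight=1), dict(name="beta", weight=1)],
        ),
    ],
    observation_budget=30,
)
```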
We are excited to announce SigOpt Organizations, a set of new features for large organizations including capabilities like multiple sub-teams, usage insights, and more fine-grained access controls.
Today, we’re happy to announce a strategic investment and technology development agreement with In-Q-Tel (IQT).
Recent research on the optimality of circulant binary embeddings.
Read about new features like conditional parameters and high parallelism, our research at NIPS, and more in our winter update.
Today, we’re doubling down on our collaboration with AWS through support for AWS PrivateLink.
We presented our state-of-the-art optimization platform to the Barclays executive team.
Gartner has named SigOpt a Cool Vendor in its 2017 report on AI Core Technologies.
How to use positive definite kernels to sample away from the boundary.
Two improvement-based policies and the difference in their behaviors.
How to avoid common mistakes in hyperparameter optimization.
Using Bayesian optimization to dramatically improve the performance of a reinforcement learning algorithm.
How to use Bayesian optimization to build the optimal mouse trap.
A how-to guide for solving common optimization issues.
We closed out our Series A with Andreessen Horowitz leading.
What you need to know to get started with multicriteria optimization.
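As a rough illustration (not the post's exact setup), a multimetric experiment with the SigOpt Python client might look like the sketch below; the metric names, bounds, and stand-in evaluation are placeholders.

```python
# Hedged sketch of a multimetric experiment; metric names, bounds, and the
# stand-in evaluation are placeholders for a real training/measurement step.
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

experiment = conn.experiments().create(
    name="Multicriteria tuning sketch",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
    ],
    metrics=[
        dict(name="accuracy", objective="maximize"),
        dict(name="inference_time", objective="minimize"),
    ],
    observation_budget=40,
)

for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    lr = suggestion.assignments["learning_rate"]
    # Stand-in for real model training and benchmarking; replace with your own
    # evaluation that returns both metric values.
    accuracy, inference_time = 1.0 - lr, 100.0 * lr
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        values=[
            dict(name="accuracy", value=accuracy),
            dict(name="inference_time", value=inference_time),
        ],
    )
```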
In this post we will show how to tune an MLlib collaborative filtering pipeline using Bayesian optimization via SigOpt.
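A condensed, hypothetical version of that loop is sketched below: SigOpt proposes ALS hyperparameters, Spark MLlib trains the recommender, and validation RMSE is reported back. The ratings file, column names, and parameter ranges are placeholders rather than the post's actual configuration.

```python
# Sketch of the tuning loop: SigOpt suggests ALS hyperparameters, MLlib trains,
# and the negative validation RMSE is reported back to the maximizing optimizer.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator
from sigopt import Connection

spark = SparkSession.builder.appName("als-tuning-sketch").getOrCreate()
# Placeholder MovieLens-style ratings file with userId, movieId, rating columns.
ratings = spark.read.csv("ratings.csv", header=True, inferSchema=True)
train, test = ratings.randomSplit([0.8, 0.2], seed=0)

conn = Connection(client_token="YOUR_API_TOKEN")
experiment = conn.experiments().create(
    name="MLlib ALS tuning sketch",
    parameters=[
        dict(name="rank", type="int", bounds=dict(min=4, max=40)),
        dict(name="log_reg_param", type="double", bounds=dict(min=-5, max=0)),
    ],
    observation_budget=20,
)

evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
                                predictionCol="prediction")
for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    als = ALS(rank=suggestion.assignments["rank"],
              regParam=10 ** suggestion.assignments["log_reg_param"],
              userCol="userId", itemCol="movieId", ratingCol="rating",
              coldStartStrategy="drop")
    rmse = evaluator.evaluate(als.fit(train).transform(test))
    # Lower RMSE is better, so report its negative to the maximizing optimizer.
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id, value=-rmse)
```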
We review the history of the tools implemented within SigOpt, then discuss the original solution to the black-box optimization problem.
Combining Nervana Cloud with SigOpt's efficient tuning makes easily accessible, production-quality deep learning a reality for everyone.
In this post we demonstrate the relevance of model tuning on a basic prediction strategy for investing in bond futures.
This blog post presents methods for comparing different optimization strategies on any optimization problem, using hyperparameter tuning as the motivating example.
We will introduce the concept of functions and discuss what properties make certain functions more dangerous to maximize.
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and TensorFlow to efficiently search for an optimal configuration of a convolutional neural network (CNN).
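The sketch below compresses that pattern into a few lines, assuming the SigOpt Python client and the Keras API bundled with TensorFlow; the network shape, parameter names, and single-epoch training are illustrative, not the post's configuration.

```python
# Compressed sketch: draw CNN hyperparameters from a SigOpt suggestion, train a
# tiny Keras model on MNIST, and report validation accuracy as the observation.
import tensorflow as tf
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")
experiment = conn.experiments().create(
    name="CNN tuning sketch",
    parameters=[
        dict(name="log_learning_rate", type="double", bounds=dict(min=-5, max=-1)),
        dict(name="filters", type="int", bounds=dict(min=8, max=64)),
    ],
    observation_budget=20,
)

(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train, x_val = x_train[..., None] / 255.0, x_val[..., None] / 255.0

def train_and_score(assignments):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(assignments["filters"], 3, activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(10 ** assignments["log_learning_rate"]),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)
    return model.evaluate(x_val, y_val, verbose=0)[1]

for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id, value=train_and_score(suggestion.assignments))
```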
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and XGBoost to efficiently optimize an unsupervised learning algorithm’s hyperparameters to increase performance on a classification task.
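A minimal sketch of the XGBoost half of that workflow appears below (the unsupervised feature-learning step from the post is omitted); the dataset, parameter names, and ranges are illustrative.

```python
# Sketch: SigOpt proposes XGBoost hyperparameters and receives back the
# cross-validated classification accuracy on a small scikit-learn dataset.
import xgboost as xgb
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sigopt import Connection

X, y = load_digits(return_X_y=True)
conn = Connection(client_token="YOUR_API_TOKEN")
experiment = conn.experiments().create(
    name="XGBoost tuning sketch",
    parameters=[
        dict(name="max_depth", type="int", bounds=dict(min=2, max=10)),
        dict(name="learning_rate", type="double", bounds=dict(min=0.01, max=0.5)),
    ],
    observation_budget=30,
)

for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    model = xgb.XGBClassifier(
        max_depth=suggestion.assignments["max_depth"],
        learning_rate=suggestion.assignments["learning_rate"],
        n_estimators=100,
    )
    accuracy = cross_val_score(model, X, y, cv=3).mean()
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id, value=accuracy)
```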
Through its RESTful design, our v1 API has the clarity and structure to facilitate simultaneous optimization loops.
We used SigOpt to tune features and hyperparameters hoping to find a winning combination that could beat the house.
SigOpt can be used to find the location at which a process is maximized.
This post provides a comparison between minimizing the profile likelihood and minimizing the kriging variance.
Using Gaussian processes and repeated experimentation to expose the optimal behavior for the user.
We show that SigOpt outperforms both strategies scikit-learn recommends for hyperparameter optimization on this task.
The need for approximation stems from costly experimentation. How can we minimize that expenditure?
Defining a Gaussian process also requires choosing a covariance kernel, and practitioners know that an inappropriate kernel can produce poor results.
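As an illustration of why the kernel matters (not taken from the post), the short sketch below fits a toy dataset with two different scikit-learn kernels and compares their log marginal likelihoods.

```python
# Illustrative sketch: fit a toy 1-D function with two different covariance
# kernels and use the log marginal likelihood as a quick suitability check.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(15, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(15)

for kernel in (RBF(length_scale=1.0), Matern(length_scale=1.0, nu=1.5)):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    print(kernel, gp.log_marginal_likelihood_value_)
```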
SigOpt was used to optimize an aerodynamics simulation on Rescale. The simulation calculates the best lift ratio for an airfoil across a range of airspeeds and angles of attack.
Our vision is for SigOpt to eliminate that kind of manual testing, and today, we’ve come a step closer.
Boiling the key metrics you care about down to a single value is an important step toward achieving your goals.
In-depth optimization research