Documentation

Welcome to SigOpt’s developer documentation. If you have a question you can’t answer, feel free to contact us!
This feature is currently in beta. You can request free access on the beta's main page or contact us directly for more information.

Overview

SigOpt is a platform- and framework-agnostic tool for experiment management and optimization.

As you use SigOpt, you'll regularly be creating either:

  • Training Runs, or
  • Experiments.

Both of these are organized within a Project.

Training Runs

A SigOpt Run stores the training and evaluation of a model, so that modelers can see a history of their work. This is the fundamental building block of SigOpt.

Runs record everything you might need to understand how a model was built, reconstitute the model in the future, or explain the process to a colleague.

Common attributes of a Run include:

  • the model type,
  • dataset identifier,
  • evaluation metrics,
  • hyperparameters,
  • logs, and
  • a code reference.

For a complete list of attributes see the API Reference.

Training runs are recorded by adding SigOpt code snippets to the Python code you run in a notebook or from the command line.
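For example, an instrumented training script might look like the sketch below. This is a minimal sketch, not the definitive integration: the module-level helpers shown (sigopt.params, sigopt.log_model, sigopt.log_dataset, sigopt.log_metric) are assumptions about the SigOpt Python client, and the scikit-learn model and metric are placeholders. See the API Reference for the exact names and signatures.

    # train.py -- minimal sketch of an instrumented training script.
    # The sigopt.* logging helpers below are assumed; check the API Reference.
    import sigopt
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Record what was trained and on which data.
    sigopt.log_model("RandomForestClassifier")
    sigopt.log_dataset("sklearn iris")

    # Hyperparameters: set local defaults; SigOpt can assign values later.
    sigopt.params.setdefault("n_estimators", 100)
    sigopt.params.setdefault("max_depth", 5)

    X, y = load_iris(return_X_y=True)
    model = RandomForestClassifier(
        n_estimators=sigopt.params.n_estimators,
        max_depth=sigopt.params.max_depth,
    )

    # Record the evaluation metric for this run.
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    sigopt.log_metric("accuracy", accuracy)

You would then launch this script from a notebook or with SigOpt's command-line tooling, as described in the installation and configuration pages, so the run is attributed to your project.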

Once recorded, runs will appear in a table in your project.

Experiments

An experiment is an automated search of your model's hyperparameter space, typically for the purpose of tuning a model. You define the parameter space and request suggestions from the SigOpt API. Suggestions can be generated randomly, as a grid search, or using an ensemble of Bayesian optimization methods.
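As an illustration, defining a parameter space and creating an experiment with the SigOpt core Python API might look like the following sketch. The parameter names, bounds, and budget are placeholder values, and the runs-based beta workflow may use different calls; consult the API Reference for the definitive interface.

    from sigopt import Connection

    # Connect with your API token (placeholder value shown here).
    conn = Connection(client_token="YOUR_API_TOKEN")

    # Define the hyperparameter space to search and the metric to optimize.
    experiment = conn.experiments().create(
        name="Random forest tuning",
        parameters=[
            dict(name="n_estimators", type="int", bounds=dict(min=10, max=500)),
            dict(name="max_depth", type="int", bounds=dict(min=2, max=20)),
            dict(name="min_samples_split", type="double", bounds=dict(min=0.01, max=0.5)),
        ],
        metrics=[dict(name="accuracy", objective="maximize")],
        observation_budget=30,
    )
    print("Created experiment:", experiment.id)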

Experiments can be instrumented to create one training run per suggestion. The metric values that measure the model's performance for a suggestion are reported back to the experiment as an observation.
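Continuing the sketch above, the optimization loop asks the experiment for a suggestion, trains and evaluates a model with the suggested assignments, and reports the resulting metric back as an observation. The train_and_evaluate helper is hypothetical; the rest follows the core API's suggestion/observation pattern.

    # Suggestion -> training run -> observation loop (sketch).
    for _ in range(experiment.observation_budget):
        # Ask SigOpt for the next set of hyperparameters to try.
        suggestion = conn.experiments(experiment.id).suggestions().create()

        # train_and_evaluate is a hypothetical helper that fits a model with
        # the suggested hyperparameters and returns the evaluation metric.
        accuracy = train_and_evaluate(suggestion.assignments)

        # Report the metric back to the experiment as an observation.
        conn.experiments(experiment.id).observations().create(
            suggestion=suggestion.id,
            value=accuracy,
        )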

Compare individual runs and experiment runs on the Project Analysis page.

Next

Get started by installing and configuring the SigOpt Python package.