SigOpt is a platform- and framework-agnostic tool for experiment management and optimization.
As you use SigOpt, you'll regularly create either Runs or Experiments.
Both of these are organized within a Project.
A SigOpt Run stores the training and evaluation of a model, so that modelers can see a history of their work. This is the fundamental building block of SigOpt.
Runs record everything you might need to understand how a model was built, reconstitute the model in the future, or explain the process to a colleague.
Common attributes of a Run include:
- the model type,
- dataset identifier,
- evaluation metrics,
- logs, and
- a code reference.
For a complete list of attributes see the API Reference.
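To make the attribute list above concrete, here is a minimal sketch of the kind of record a Run represents. This is not the SigOpt API; the class and every value in it are hypothetical illustrations of what a Run typically stores.

```python
from dataclasses import dataclass, field

# Conceptual sketch only: NOT the SigOpt API, just an illustration of
# the attributes a Run typically records.
@dataclass
class RunRecord:
    model_type: str                              # model family or library used
    dataset: str                                 # identifier for the training data
    metrics: dict = field(default_factory=dict)  # evaluation metrics
    logs: str = ""                               # captured training output
    code_ref: str = ""                           # e.g. a git commit hash

# All names and values below are hypothetical examples.
run = RunRecord(
    model_type="xgboost",
    dataset="train-2023-q1",
    metrics={"accuracy": 0.91},
    code_ref="a1b2c3d",
)
```

A record like this is enough to reconstitute how a model was built or to hand the work off to a colleague; the real Run object tracks these fields (and more) automatically.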
Once recorded, Runs appear in a table in your Project:
An Experiment is an automated search of your model's hyperparameter space, typically for the purpose of tuning a model. You define the parameter space and request suggestions from the SigOpt API. Suggestions can be generated randomly, as a grid search, or using an ensemble of Bayesian optimization methods.
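The two simpler suggestion strategies can be sketched in a few lines of plain Python. This is an illustration of the concepts only: the real SigOpt service generates suggestions server-side, and its Bayesian ensemble is not shown here. The parameter space below is hypothetical.

```python
import itertools
import random

def grid_suggestions(space):
    """Yield every combination of the listed values (grid search)."""
    names = list(space)
    for combo in itertools.product(*(space[name] for name in names)):
        yield dict(zip(names, combo))

def random_suggestion(space, rng=random):
    """Draw one value uniformly per parameter (random search)."""
    return {name: rng.choice(values) for name, values in space.items()}

# Hypothetical parameter space.
space = {"learning_rate": [0.001, 0.01, 0.1], "batch_size": [32, 64]}
suggestions = list(grid_suggestions(space))  # 3 x 2 = 6 grid points
```

Grid search enumerates the space exhaustively, random search samples it, and Bayesian methods use earlier results to choose where to evaluate next.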
Compare individual and Experiment Runs on the Project Analysis page:
Get started by installing and configuring the SigOpt Python package.
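The setup typically looks like the following. The package name `sigopt` is real; `sigopt config` is the CLI command I believe handles interactive token setup, so treat it as an assumption and check the install docs for your version.

```shell
# Install the SigOpt Python client from PyPI
pip install sigopt

# Configure your API token (assumed CLI command; prompts for the token
# from your SigOpt account page)
sigopt config
```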