Documentation

Welcome to the developer documentation for SigOpt. If you have a question you can’t answer, feel free to contact us! If you're looking for the classic SigOpt documentation, you can find it here. Otherwise, happy optimizing!

Experiment Management

SigOpt enables you to track and organize your modeling experiments, trace your decision-making, and reproduce your results. With interactive visualizations you can quickly compare training curves, metrics, and models. This helps you understand model performance, inform your intuition, and explain results to your colleagues. Learn more about SigOpt’s Experiment Management here.

SigOpt Runs

A SigOpt Run stores a model’s attributes, training checkpoints, and evaluated metrics, so that modelers can see a history of their work. This is the fundamental building block of SigOpt.

Runs record everything you might need to understand how a model was built, reconstitute the model in the future, or explain the process to a colleague.
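For example, the snippet below is a minimal sketch of recording a Run with the sigopt Python client. The method names used here (create_run, log_model, log_dataset, log_parameters, log_checkpoint, log_metric) and the values passed to them are illustrative assumptions; see the API Reference for the exact logging calls available in your client version.

```python
# Minimal sketch of recording a SigOpt Run with the Python client.
# Method names are assumptions for illustration; consult the API
# Reference for the definitive list of Run attributes and calls.
import sigopt

with sigopt.create_run(name="mnist-baseline") as run:
    # Attributes describing how the model was built
    run.log_model("keras CNN")
    run.log_dataset("MNIST")
    run.log_parameters({"learning_rate": 1e-3, "epochs": 3})

    # Training checkpoints recorded as the model trains
    for epoch, loss in enumerate([0.42, 0.31, 0.27]):
        run.log_checkpoint({"train_loss": loss})

    # Final evaluated metric for this Run
    run.log_metric("accuracy", 0.97)
```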

For a complete list of attributes, see the API Reference or go to the SigOpt Runs docs.

SigOpt Runs can be recorded by adding a few lines of Python to your training code, whether you run that code in a notebook or from the command line, as sketched below.
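As a sketch of the command-line workflow, a training script can call the client's logging functions and be wrapped by the SigOpt CLI so those calls attach to a new Run. The module-level functions and the wrapping command shown in the comments are assumptions for illustration, and train.py is just a placeholder filename; see the SigOpt Runs docs for the exact CLI and notebook workflows.

```python
# train.py: a script whose logging calls attach to a Run when wrapped
# by the CLI, e.g. `sigopt run python train.py` (command and
# module-level helpers shown here are assumptions; see the SigOpt
# Runs docs for the exact workflow).
import sigopt

sigopt.log_model("logistic regression")   # model attribute
sigopt.log_metric("accuracy", 0.95)       # evaluated metric
```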