Archived Documentation

Welcome to the developer documentation for SigOpt. If you have a question you can’t answer, feel free to contact us!
You are currently viewing archived SigOpt documentation. The newest documentation can be found on SigOpt's current documentation site.

Optimization Loop

The optimization loop is the backbone of using SigOpt. After creating your experiment, run through these three simple steps, in a loop:

Find your SigOpt API token on the API tokens page.
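Hard-coding the token string, as in the snippets below, is fine for trying things out. As a minimal sketch, you might instead read it from an environment variable before constructing the connection (the helper and environment-variable names here are our own convention, not part of the SigOpt client):

```python
import os

def get_api_token(env_var="SIGOPT_API_TOKEN"):
    # Read the API token from the environment; fail loudly if it is missing
    token = os.environ.get(env_var)
    if token is None:
        raise RuntimeError("Set the %s environment variable" % env_var)
    return token
```

You would then pass `get_api_token()` as the `client_token` argument to `Connection`.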

Receive a Suggestion

Create a new Suggestion via the API:

from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")
suggestion = conn.experiments(EXPERIMENT_ID).suggestions().create()

Evaluate Your Metric

The response of the previous API call will include the next parameters for you to try. The endpoint call will suggest parameters that optimize your metric. At this point, you should evaluate your metric with the provided parameters; this can take anywhere from milliseconds to days, so just report back to SigOpt when you're ready.

def evaluate_metric(assignments, dataset):
  # Make a model using the new hyperparameters
  model = make_model(assignments)

  # Obtain a metric for the dataset
  return score_model(model, dataset)

Report an Observation

When the metric has been evaluated, report an Observation, replacing the string SUGGESTION_ID with the ID of the suggestion from the first step:

from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")
observation = conn.experiments(EXPERIMENT_ID).observations().create(
  suggestion=SUGGESTION_ID,
  value=value,
)

SigOpt will accept the data and start optimizing.

Putting It All Together

We recommend setting an Observation Budget when creating your experiment, and running the optimization loop until the budget is exhausted. Here is what the full optimization loop may look like for a SigOpt experiment.

from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")
experiment = conn.experiments(EXPERIMENT_ID).fetch()

# Run the Optimization Loop until the Observation Budget is exhausted
while experiment.progress.observation_count < experiment.observation_budget:
  # Receive a suggestion
  suggestion = conn.experiments(experiment.id).suggestions().create()

  # Evaluate your metric
  value = evaluate_metric(suggestion.assignments, dataset)

  # Report an observation
  conn.experiments(experiment.id).observations().create(
    suggestion=suggestion.id,
    value=value,
  )

  # Update the experiment object
  experiment = conn.experiments(experiment.id).fetch()

# Fetch the best configuration and explore your experiment
all_best_assignments = conn.experiments(experiment.id).best_assignments().fetch()
# Returns a list of dict-like Observation objects
best_assignments = all_best_assignments.data[0].assignments
print("Best Assignments: " + str(best_assignments))
# Access assignment values as:
# parameter_value = best_assignments['parameter_name']
print("Explore your experiment: https://app.sigopt.com/experiment/" + experiment.id + "/analysis")

What if Something Goes Wrong?

See our documentation on Evaluation Metric Failure to handle unexpected failures and errors during the Optimization Loop.
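For instance, one common pattern is to catch exceptions raised during evaluation and report the observation as failed rather than crashing the loop. This is a sketch under our own assumptions, not SigOpt's prescribed approach; `safe_evaluate` is a hypothetical helper, and the reporting comment assumes the `failed` flag accepted by the Observation endpoint:

```python
def safe_evaluate(evaluate_metric, assignments, dataset):
    # Run the evaluation, converting any exception into a "failed" result.
    # Returns (value, failed): a successful run yields (value, False),
    # a failed run yields (None, True).
    try:
        return evaluate_metric(assignments, dataset), False
    except Exception:
        return None, True

# Reporting (sketch; assumes conn and suggestion from the loop above):
# value, failed = safe_evaluate(evaluate_metric, suggestion.assignments, dataset)
# conn.experiments(EXPERIMENT_ID).observations().create(
#     suggestion=suggestion.id,
#     value=value,
#     failed=failed,
# )
```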

Next Steps

Learn how to use SigOpt to parallelize your experiments.