Optimization Loop

The optimization loop is the backbone of using SigOpt. After creating your experiment, run through these three simple steps in a loop:

Before you begin, find your SigOpt API token on the API Tokens page; the snippets below use it to authenticate.
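If you prefer to keep the token out of your source code, a minimal sketch is to read it from an environment variable (the variable name SIGOPT_API_TOKEN is a convention for this example, not a requirement):

import os

from sigopt import Connection

# Authenticate with a token stored in the environment
conn = Connection(client_token=os.environ["SIGOPT_API_TOKEN"])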

Receive a Suggestion

Create a new Suggestion via the API:

from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")
conn.set_api_url("https://api.sigopt.com")

suggestion = conn.experiments(EXPERIMENT_ID).suggestions().create()
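The returned Suggestion carries an id, which you will need when reporting the Observation, and dict-like assignments mapping each parameter name to its suggested value. A quick sketch, where the parameter name learning_rate is illustrative:

# Inspect the suggestion; 'learning_rate' stands in for a parameter
# defined at Experiment Create
print(suggestion.id)
print(suggestion.assignments["learning_rate"])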

Evaluate Your Metric

The response of the previous API call includes the next parameters for you to try; SigOpt chooses them to optimize your metric. At this point, evaluate your metric with the suggested parameters. Evaluation can take anywhere from milliseconds to days, so just report back to SigOpt when you're ready.

def evaluate_metric(assignments, dataset):
  # Build a model using the suggested hyperparameters
  # (make_model and score_model are placeholders for your own code)
  model = make_model(assignments)

  # Compute and return the metric value for the dataset
  return score_model(model, dataset)
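As a concrete illustration, here is one way evaluate_metric could look for a scikit-learn model. The parameter names max_depth and min_samples_split are assumptions about how the experiment was defined, and mean cross-validated accuracy serves as the metric:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate_metric(assignments, dataset):
  X, y = dataset
  # Build the model from the suggested hyperparameters
  model = RandomForestClassifier(
    max_depth=assignments["max_depth"],
    min_samples_split=assignments["min_samples_split"],
  )
  # Mean cross-validated accuracy is the value reported to SigOpt
  return cross_val_score(model, X, y, cv=5).mean()

dataset = load_iris(return_X_y=True)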

Report an Observation

When the metric has been evaluated, report an Observation, replacing the string SUGGESTION_ID with the ID of the suggestion from the first step:

from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")
conn.set_api_url("https://api.sigopt.com")

observation = conn.experiments(EXPERIMENT_ID).observations().create(
  suggestion="SUGGESTION_ID",
  value=METRIC_VALUE,  # the metric value, reported as a number, not a string
)

SigOpt will accept the data and start optimizing.

Putting it all together

We recommend setting an Observation Budget during Experiment Create and running the optimization loop until the budget is exhausted; sketches of both follow.
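The budget is the observation_budget field at Experiment Create; here is a minimal sketch with two illustrative parameters (the names and bounds are assumptions for this example):

experiment = conn.experiments().create(
  name="Example Experiment",
  parameters=[
    dict(name="max_depth", type="int", bounds=dict(min=2, max=12)),
    dict(name="min_samples_split", type="int", bounds=dict(min=2, max=10)),
  ],
  observation_budget=30,
)

With the experiment created, here is what the full optimization loop may look like: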

experiment = conn.experiments(EXPERIMENT_ID).fetch()

# Run the Optimization Loop until the Observation Budget is exhausted
while experiment.progress.observation_count < experiment.observation_budget:
    # Receive a suggestion
    suggestion = conn.experiments(experiment.id).suggestions().create()

    # Evaluate your metric
    value = evaluate_metric(suggestion.assignments, dataset)

    # Report an observation
    conn.experiments(experiment.id).observations().create(
      suggestion=suggestion.id,
      value=value,
    )

    # Update the experiment object
    experiment = conn.experiments(experiment.id).fetch()
    
# Fetch the best configuration and explore your experiment
all_best_assignments = conn.experiments(experiment.id).best_assignments().fetch()
# Returns a list of dict-like Observation objects
best_assignments = all_best_assignments.data[0].assignments
print("Best Assignments: " + str(best_assignments))
# Access assignment values as:
#   parameter_value = best_assignments['parameter_name']
print("Explore your experiment: https://app.sigopt.com/experiment/" + experiment.id + "/analysis")

What if Something Goes Wrong?

See our documentation on Evaluation Metric Failure to handle unexpected failures and errors during the Optimization Loop.
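For instance, if metric evaluation raises an exception, one common pattern is to report the Observation as failed so SigOpt can account for the unusable suggestion; a sketch reusing the loop variables above:

try:
  value = evaluate_metric(suggestion.assignments, dataset)
  conn.experiments(experiment.id).observations().create(
    suggestion=suggestion.id,
    value=value,
  )
except Exception:
  # Report the suggestion as failed instead of supplying a value
  conn.experiments(experiment.id).observations().create(
    suggestion=suggestion.id,
    failed=True,
  )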

Next Steps

Learn how to use SigOpt to parallelize your experiments.