The optimization loop is the backbone of using SigOpt. After creating your experiment, run through these three steps in a loop:
Find your SigOpt API token on the API tokens page.
Receive a Suggestion
Create a new Suggestion via the API:
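A minimal sketch of this step using the `sigopt` Python client; the function name, `api_token`, and `experiment_id` arguments are placeholders for illustration:

```python
def fetch_suggestion(api_token, experiment_id):
    """Ask SigOpt for the next set of parameter assignments to try."""
    # Requires the SigOpt client: pip install sigopt
    from sigopt import Connection

    conn = Connection(client_token=api_token)
    # Create a new Suggestion for this experiment via the API
    return conn.experiments(experiment_id).suggestions().create()
```

The returned Suggestion's `assignments` field holds the parameter values to evaluate in the next step.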
Evaluate Your Metric
The response from the previous API call includes the next parameters for you to try. The endpoint suggests parameters intended to maximize your metric. At this point, evaluate your metric with the provided parameters - this can take anywhere from milliseconds to days, so just report back to SigOpt when you're ready.
def evaluate_metric(assignments, dataset):
    # Make a model using the new hyperparameters
    model = make_model(assignments)
    # Obtain a metric for the dataset
    return score_model(model, dataset)
Report an Observation
When the metric has been evaluated, report an Observation, replacing the string SUGGESTION_ID with the ID of the suggestion from the first step:
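A minimal sketch of reporting an Observation with the `sigopt` Python client; the function name and its arguments are placeholders for illustration:

```python
def report_observation(api_token, experiment_id, suggestion_id, value):
    """Report the evaluated metric value back to SigOpt."""
    # Requires the SigOpt client: pip install sigopt
    from sigopt import Connection

    conn = Connection(client_token=api_token)
    # Attach the metric value to the Suggestion it was evaluated for
    return conn.experiments(experiment_id).observations().create(
        suggestion=suggestion_id,
        value=value,
    )
```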
SigOpt will accept the data and start optimizing.
Putting it all together
We recommend setting an Observation Budget during Experiment Create, and running the optimization loop until the budget is exhausted. Here is what the full optimization loop may look like for a SigOpt experiment.
experiment = conn.experiments(EXPERIMENT_ID).fetch()

# Run the optimization loop until the observation budget is exhausted
while experiment.progress.observation_count < experiment.observation_budget:
    # Receive a suggestion
    suggestion = conn.experiments(experiment.id).suggestions().create()
    # Evaluate your metric
    value = evaluate_metric(suggestion.assignments, dataset)
    # Report an observation
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        value=value,
    )
    # Update the experiment object
    experiment = conn.experiments(experiment.id).fetch()

# Fetch the best configuration and explore your experiment
best_assignments = conn.experiments(EXPERIMENT_ID).best_assignments().fetch().data[0].assignments
print("Best Assignments: " + str(best_assignments))
print("Explore your experiment: https://app.sigopt.com/experiment/" + experiment.id + "/analysis")
What if Something Goes Wrong?
See our documentation on Evaluation Metric Failure to handle unexpected failures and errors during the Optimization Loop.
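When an evaluation cannot produce a value (for example, the model diverges or training crashes), SigOpt lets you mark the Observation as failed rather than reporting a number. A minimal sketch, with the function name and arguments as placeholders:

```python
def report_failure(api_token, experiment_id, suggestion_id):
    """Tell SigOpt that evaluating this suggestion failed."""
    # Requires the SigOpt client: pip install sigopt
    from sigopt import Connection

    conn = Connection(client_token=api_token)
    # Report a failed Observation instead of a metric value
    return conn.experiments(experiment_id).observations().create(
        suggestion=suggestion_id,
        failed=True,
    )
```

The optimizer accounts for failed Observations and steers future Suggestions away from regions that consistently fail.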
Learn how to use SigOpt to parallelize your experiments.