Documentation

Welcome to the developer documentation for SigOpt! If you're looking for the classic SigOpt documentation, you can find it here. If you have a question you can't answer, feel free to contact us. Otherwise, happy optimizing!

Optimize Your Model

A key component of the SigOpt Platform is the ability to go from tracking your model with SigOpt Runs to optimizing that very same model with minimal changes to your code.

At a high level, a SigOpt Experiment is a grouping of SigOpt Runs, defined by user-defined parameter and metric spaces. A SigOpt Experiment has a budget that determines the number of hyperparameter tuning loops to conduct. Each loop produces a SigOpt Run with suggested assignments for each parameter. Sets of hyperparameter values are suggested by SigOpt's algorithms, by the user, or by both, with the goal of finding the optimal set(s) of hyperparameters. Over time, when using the SigOpt Optimizer, you can expect your model to perform better on your metrics.
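To make the loop concrete, here is a minimal pure-Python sketch of what one tuning loop does. This is purely illustrative, not the SigOpt API: random sampling stands in for SigOpt's suggestion algorithms, and the names `suggest`, `evaluate`, and `budget` are assumptions made for this sketch.

```python
import random

def suggest():
    # A real optimizer proposes assignments intelligently;
    # random sampling stands in for that here.
    return {
        "hidden_layer_size": random.randint(32, 128),
        "activation_function": random.choice(["relu", "tanh"]),
    }

def evaluate(assignments):
    # Stand-in for training and scoring a model on a holdout set.
    return assignments["hidden_layer_size"] / 128.0

budget = 30
best = None
for _ in range(budget):           # the budget bounds the number of loops
    assignments = suggest()       # each loop yields suggested assignments
    value = evaluate(assignments) # "report" the metric for this run
    if best is None or value > best[1]:
        best = (assignments, value)
```

SigOpt handles the `suggest` step for you; your code only needs to execute the model and report the metric back, as the sections below show.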

The Optimization Loop

There are three core steps in the optimization loop:

Create a SigOpt Experiment

experiment = sigopt.experiment_create(
  name="Keras Model Optimization (Python)",
  type="offline",
  parameters=[
    dict(name="hidden_layer_size", type="int", bounds=dict(min=32, max=128)),
    dict(name="activation_function", type="categorical", categorical_values=["relu", "tanh"]),
  ],
  metrics=[dict(name="holdout_accuracy", objective="maximize")],
  parallel_bandwidth=1,
  budget=30,
)

Iterate over your Experiment

You can iterate over your Experiment in two ways. Each optimization loop produces a SigOpt Run with suggested assignments for each parameter.

The first option is to use the built-in loop:

for run in experiment.loop():
  with run:
    # execute model
    # evaluate model
    # report metric values to SigOpt

The second option is to create Runs yourself until the budget is exhausted:

while not experiment.is_finished():
  with experiment.create_run() as run:
    # execute model
    # evaluate model
    # report metric values to SigOpt
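In both forms, the `with` block ensures each Run is finalized even if your training code raises an exception. The following is a rough, hypothetical sketch of that context-manager behavior, not SigOpt's actual implementation; the `FakeRun` class and its `state` attribute are invented for illustration.

```python
class FakeRun:
    """Illustrative stand-in for a run context object."""

    def __init__(self):
        self.state = "active"
        self.metrics = {}

    def log_metric(self, name, value):
        self.metrics[name] = value

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Mark the run failed on exception, completed otherwise.
        self.state = "failed" if exc_type else "completed"
        return False  # never swallow the exception

run = FakeRun()
with run:
    run.log_metric("holdout_accuracy", 0.93)
```

After the block exits, the run's final state is recorded, so every loop iteration leaves a well-defined Run behind.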

Report metric values to SigOpt

run.log_metric("holdout_accuracy", holdout_accuracy)

Putting it all together

experiment = sigopt.experiment_create(
  name="Keras Model Optimization (Python)",
  type="offline",
  parameters=[
    dict(name="hidden_layer_size", type="int", bounds=dict(min=32, max=128)),
    dict(name="activation_function", type="categorical", categorical_values=["relu", "tanh"]),
  ],
  metrics=[dict(name="holdout_accuracy", objective="maximize")],
  parallel_bandwidth=1,
  budget=30,
)

for run in experiment.loop():
  with run:
    holdout_accuracy = execute_keras_model(run)
    run.log_metric("holdout_accuracy", holdout_accuracy)

# get the best Runs for the Experiment
best_runs = experiment.get_best_runs()
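For a single maximized metric, the best Run is simply the one with the highest reported value. A pure-Python sketch of that selection, assuming a simple (assignments, metric value) tuple per run rather than the actual SigOpt Run object:

```python
# Illustrative records of completed runs: (assignments, metric value).
completed_runs = [
    ({"hidden_layer_size": 64, "activation_function": "relu"}, 0.91),
    ({"hidden_layer_size": 96, "activation_function": "tanh"}, 0.88),
    ({"hidden_layer_size": 128, "activation_function": "relu"}, 0.94),
]

# Pick the run with the highest value of the maximized metric.
best = max(completed_runs, key=lambda record: record[1])
```

With multiple metrics, "best" is no longer a single run; multimetric experiments return the set of best trade-offs instead.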

The rest of the Experiment docs will walk through how to set up your SigOpt Experiment, how you can bring your own optimizer, and advanced features you can leverage for your optimization.