
Bring Your Own Optimizer

At SigOpt, we have invested thousands of hours in developing a powerful optimization engine to efficiently conduct the hyperparameter search process. We understand, however, that some users prefer to bring their own strategies to the model development process, and SigOpt fully supports this. On this page, we explain how you can use your own optimization tool and store the progress in SigOpt to inspect and visualize on our web dashboard.

Show me the code

See the notebook linked below for a complete demonstration of Manual Search, Grid Search, Random Search, Optuna, and Hyperopt. The content on this page previews the visualizations generated by that notebook, along with brief code snippets.

Goal: Visualize and store results across optimization approaches

We will track each iteration of each of the optimization loops above as a SigOpt run and group all of the runs into a SigOpt project. We will then visualize the results of all the runs in the project together without writing any plotting code. Note that if your code produces custom plots, you can also attach these to runs and store them in the SigOpt backend, along with any other metadata you decide to attach to a run; for more detail, please see the runs API reference. A minimal sketch of attaching a custom plot follows the dashboard views below.

Dashboard views generated for the project: Create Custom Charts, Project Metrics Overview, Project Analysis Page.
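For attaching a custom plot, here is a minimal sketch that uses only the calls already shown on this page (sigopt.create_run and log_metadata). The project name and file path are placeholders, and the log_metadata entry records only a reference to the saved file rather than uploading it; check the runs API reference for the supported ways to attach files to a run.

import matplotlib.pyplot as plt
import sigopt

# build and save any custom chart you already produce in your code
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4.0, 2.5, 1.2])
ax.set_title("Custom convergence plot")
fig.savefig("custom_convergence_plot.png")

# record a reference to the chart on a run in the project
with sigopt.create_run(name="custom-plot-example", project="my-byo-optimizer-project") as run:
    run.log_metadata("custom_plot_path", "custom_convergence_plot.png")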

Logging run artifacts with SigOpt

Throughout this page we will use the following function as a helper in our various optimization loops. To learn more about experiment management in SigOpt, please see the experiment management documentation:

import sigopt

def log_run_utility_function(parameter_assignments, metrics_payload, RUN_METADATA):
    # project_id and TOTAL_RUN_COUNTER are defined once, outside this helper
    global TOTAL_RUN_COUNTER
    TOTAL_RUN_COUNTER += 1
    run_name = f"{RUN_METADATA['OPTIMIZATION APPROACH']} | {TOTAL_RUN_COUNTER}"
    with sigopt.create_run(name=run_name, project=project_id) as run:
        # log observation metadata with the run
        for key, value in RUN_METADATA.items():
            run.log_metadata(key, value)
        # link parameter values to the run
        for key, value in parameter_assignments.items():
            run.get_parameter(key, default=value)
        # link metric values to the run
        run.log_metric("Function Value", metrics_payload[0]["value"])
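
The helper assumes a few names defined once, earlier in the notebook. A minimal sketch of that setup, with placeholder values:

# placeholder values; the notebook defines its own project ID and metadata
project_id = "bring-your-own-optimizer"  # ID of an existing SigOpt project
TOTAL_RUN_COUNTER = 0                    # incremented once per logged run

# metadata attached to every run in a given optimization loop; the
# "OPTIMIZATION APPROACH" entry is used to build the run name above
OBSERVATION_METADATA = {"OPTIMIZATION APPROACH": "Custom Random Search"}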

How to create and log runs

To support a customized random process for generating suggested configurations, SigOpt allows you to create suggestions from the user-generated (in this case, randomly generated) value of each parameter that needs an assignment. Using the API in this way, you can run your random suggestion process for as many observations as you wish during an experiment, and then continue the experiment with other optimization approaches.

import numpy as np

for _ in range(NUM_CUSTOM_RANDOM_OBSERVATIONS):
    # draw a random configuration from the search domain
    x = np.random.uniform(low=DOMAIN_MIN, high=DOMAIN_MAX)
    y = np.random.uniform(low=DOMAIN_MIN, high=DOMAIN_MAX)
    parameter_assignments = dict(x=x, y=y)
    # evaluate the objective and log the result as a SigOpt run
    function_value = himmelblausFunction(x, y)
    metrics_payload = [dict(name="Function Value", value=function_value)]
    log_run_utility_function(parameter_assignments, metrics_payload, OBSERVATION_METADATA)
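
The loop above relies on an objective function and domain constants defined in the notebook. For reference, here is Himmelblau's function together with an illustrative search domain and budget (the exact constants in the notebook may differ):

# Himmelblau's function, a standard optimization benchmark with four global minima
def himmelblausFunction(x, y):
    return (x ** 2 + y - 11) ** 2 + (x + y ** 2 - 7) ** 2

# illustrative domain and observation count, not taken from the notebook
DOMAIN_MIN, DOMAIN_MAX = -5.0, 5.0
NUM_CUSTOM_RANDOM_OBSERVATIONS = 30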

Note: If you are familiar with the API for creating SigOpt Experiments, you can have SigOpt handle random search for you by creating the experiment object as follows:

sigopt.Connection(client_token=CLIENT_TOKEN).experiments().create(..., type="random")
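
As a sketch of what a full call might look like for the two parameters used above (the parameter bounds, metric objective, and budget here are illustrative rather than taken from the notebook):

import sigopt

conn = sigopt.Connection(client_token=CLIENT_TOKEN)
experiment = conn.experiments().create(
    name="Himmelblau random search",
    project=project_id,
    parameters=[
        dict(name="x", type="double", bounds=dict(min=-5, max=5)),
        dict(name="y", type="double", bounds=dict(min=-5, max=5)),
    ],
    metrics=[dict(name="Function Value", objective="minimize")],
    observation_budget=NUM_CUSTOM_RANDOM_OBSERVATIONS,
    type="random",
)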

Users familiar with the Suggestion and Observation APIs

For SigOpt users who are familiar with our API endpoints, the example Google Colab notebook linked below leverages the Suggestion, Observation, and Experiment API calls to track and visualize results from various optimizers.
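
For orientation before opening that notebook, the core Suggestion and Observation loop looks roughly like the following, assuming an experiment created as in the previous section:

import sigopt

conn = sigopt.Connection(client_token=CLIENT_TOKEN)

for _ in range(experiment.observation_budget):
    # ask SigOpt for the next configuration to evaluate
    suggestion = conn.experiments(experiment.id).suggestions().create()
    value = himmelblausFunction(suggestion.assignments["x"], suggestion.assignments["y"])
    # report the result back so it is tracked with the experiment
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        values=[dict(name="Function Value", value=value)],
    )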