Documentation

Welcome to SigOpt’s developer documentation. If you have a question you can’t answer, feel free to contact us!
This feature is currently in beta. You can request free access from the beta's main page, or contact us directly for more information.

API Reference & Advanced Usage

This reference includes details on the following methods:

  • sigopt.get_parameter
  • sigopt.set_parameters
  • sigopt.log_dataset
  • sigopt.log_failure
  • sigopt.log_metadata
  • sigopt.log_metric
  • sigopt.log_model
  • sigopt.log_checkpoint
  • Advanced Usage: Manually creating and ending runs (sigopt.create_run, run.end)

sigopt.get_parameter(name, default)

Records and returns a parameter value for your Run. Examples of parameters include the learning rate, weight decay, and activation function. When running an optimization, this function will seamlessly return a value generated from a SigOpt Experiment's Suggestion (if the parameter is defined in the Experiment).

Arguments

Name    | Type           | Required? | Description
name    | string         | Yes       | The name of the parameter that you would like to use.
default | string, number | No        | The value to use for the parameter if no other value is available. An exception may be raised if no default is provided and no other value can be found.

Output

Type           | Description
string, number | The value to use for this parameter in your code. A value is retrieved in the following order: (1) the suggested optimized value, (2) a value provided via sigopt.set_parameters, (3) the provided default value. If no value can be found for the parameter then an exception will be raised.
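
Example

A minimal sketch; the parameter name and default value are illustrative:

import sigopt

# Returns the Suggestion's value during optimization, a value provided via
# sigopt.set_parameters if one was set, or 0.01 otherwise.
learning_rate = sigopt.get_parameter('learning_rate', default=0.01)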

sigopt.set_parameters(values)

Records the values of multiple parameters for your Run. Subsequent calls to sigopt.get_parameter will look up the parameter value in the provided dictionary of values.

Arguments

Name   | Type                                      | Required? | Description
values | dictionary of {string: string or number}  | Yes       | A dictionary that maps parameter names to the value that should be used.
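
Example

A minimal sketch; the parameter names and values are illustrative:

import sigopt

# Record fixed values for this Run; later sigopt.get_parameter calls will
# look these up by name.
sigopt.set_parameters({
  'learning_rate': 0.01,
  'activation': 'relu',
})
learning_rate = sigopt.get_parameter('learning_rate')  # returns 0.01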

sigopt.log_dataset(name)

Logs a dataset that will be used for your Run.

Arguments

Name | Type   | Required? | Description
name | string | Yes       | The name of the dataset you would like to log.
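
Example

A minimal sketch; the dataset name is illustrative:

import sigopt

# Record which dataset this Run was trained on.
sigopt.log_dataset('mnist_train_v2')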

sigopt.log_failure()

Indicates that the Run has failed for any reason. When performing optimization, the associated Observation is marked as a failure.
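
Example

A minimal sketch of one way to record a failure; train_model is an illustrative stand-in for your own training routine:

import sigopt

try:
  train_model()
except Exception:
  # mark the Run (and, during optimization, the Observation) as failed
  sigopt.log_failure()
  raise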

sigopt.log_metadata(key, value)

Stores any extra information about your Run. Prefer sigopt.log_metric for logging your model's outputs and sigopt.get_parameter for fetching your model's parameters.

Arguments

Name  | Type           | Required? | Description
key   | string         | Yes       | The key for the metadata that you would like to log.
value | number, object | Yes       | The value of the metadata that you would like to log. If value is not a number then it will be logged as a string.
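
Example

A minimal sketch; the keys and values are illustrative:

import sigopt

sigopt.log_metadata('git_commit', 'a1b2c3d')  # non-numeric, logged as a string
sigopt.log_metadata('num_gpus', 4)            # logged as a number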

sigopt.log_metric(name, value, stddev=None)

Logs a metric value for your Run. A metric should be a scalar artifact of your model's training and evaluation. You may repeat this call with unique metric names to log values for multiple metrics. If you log the same metric multiple times, only the most recent value is kept.

Arguments

Name   | Type   | Required? | Description
name   | string | Yes       | The name of the metric that you would like to log.
value  | number | Yes       | The value of the metric to log.
stddev | number | No        | The standard deviation of the metric to log.
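
Example

A minimal sketch; the metric names and values are illustrative:

import sigopt

sigopt.log_metric('accuracy', 0.92)
# optionally include a standard deviation, e.g. measured across several folds
sigopt.log_metric('f1', 0.88, stddev=0.02)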

sigopt.log_model(type)

Logs information about your model.

Arguments

Name | Type                 | Required? | Description
type | string, keyword-only | Yes       | The type of the model that is being logged, for example "RandomForestClassifier" or "xgboost".
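
Example

A minimal sketch; the model type string is illustrative:

import sigopt

# type is a keyword-only argument
sigopt.log_model(type="RandomForestClassifier")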

sigopt.log_checkpoint(checkpoint_values)

Logs values for a single checkpoint. Log checkpoints across each epoch of your model's training to see a chart of the checkpoints on the runs page.

Arguments

Name              | Type                           | Required? | Description
checkpoint_values | dictionary of {string: number} | Yes       | The values of your checkpoints. The keys of each entry are the names of your checkpoints and the values are the numerical values that you would like to log.

Example

# log a checkpoint for each epoch
for epoch in range(number_of_epochs):
  # compute training and validation loss, then log the checkpoint
  sigopt.log_checkpoint({
    'training_loss': training_loss,
    'validation_loss': validation_loss,
  })

Advanced Usage

sigopt.create_run(name=None, project=None, suggestion=None)

Manually begin recording a Run. This will always attempt to create a new Run via the SigOpt API. Runs should be cleaned up by using a Python context manager, or by calling run.end() on the returned object when finished. See the examples below.

Example

with sigopt.create_run(name='Run example', project='run-examples') as run:
    # log info about your run...
    run.log_dataset(name="my_training_data_v1")
    run.log_model(type="logisticRegression")
    model = make_my_model()
    training_loss = model.train(
      my_training_data,
      lr=run.get_parameter('learning_rate', default=0.1)
    )
    run.log_metric('training loss', training_loss)
    accuracy = model.evaluate(my_validation_data)
    run.log_metric('accuracy', accuracy)

Arguments

Name       | Type   | Required? | Description
name       | string | No        | A helpful name to associate with your recorded Run.
project    | string | No        | The project ID to associate your recorded Run with. Defaults to a project named after the current directory. If this project does not already exist, it will be created for you. Must contain only lowercase letters, numbers, hyphens, underscores, and periods (i.e. must match the regular expression /^[a-z0-9\-_\.]+$/).
suggestion | string | No        | The ID of the SigOpt Suggestion that you would like to use for this Run. Use this when you want to implement your own SigOpt optimization loop.

Output

Type                       | Description
sigopt.runs.LiveRunContext | Use this in a context manager to indicate the scope of the Run, or call the .end() method when you are finished with it. This object has all of the log_* and get_parameter methods mentioned above.

run.end()

This is a method on the object returned by sigopt.create_run. It stops recording the Run. Use it when a context manager is impractical.

Example

run = sigopt.create_run(name='Run example', project='run-examples')
# log info about your run...
run.log_dataset(name="my_training_data_v1")
run.log_model(type="logisticRegression")

model = make_my_model()
training_loss = model.train(
  my_training_data,
  lr=run.get_parameter('learning_rate', 0.1)
)
run.log_metric('training loss', training_loss)

accuracy = model.evaluate(my_validation_data)
run.log_metric('accuracy', accuracy)
# end your run to indicate that it has completed
run.end()

Limitations

At this time, Training Monitor runs are not integrated with Project pages and thus will not appear alongside the runs created by the CLI in data tables and charts. Training Monitor runs will only be seen in Experiment pages.