API Reference & Advanced Usage
This reference includes details on the following methods:
- sigopt.get_parameter(name, default)
- sigopt.set_parameters(values)
- sigopt.log_dataset(name)
- sigopt.log_failure()
- sigopt.log_image(image, name=None)
- sigopt.log_metadata(key, value)
- sigopt.log_metric(name, value, stddev=None)
- sigopt.log_model(type)
- sigopt.log_checkpoint(checkpoint_values)
- Advanced Usage: Manually creating and ending runs
sigopt.get_parameter(name, default)
Records and returns a parameter value for your Run. Some examples of parameters are learning rate, weight decay, activation function, etc. When running optimization, this function will seamlessly return a value generated from a SigOpt Experiment's Suggestion (if the parameter is defined in the Experiment).
Arguments
Name | Type | Required? | Description |
---|---|---|---|
name | string | Yes | The name of the parameter that you would like to use. |
default | string, number | No | The value of the parameter to use if there is no other value to use. An exception may be raised if the default is not provided. |
Output
Type | Description |
---|---|
string, number | The value to use for this parameter in your code. A value is retrieved in the following order: the assignment from a SigOpt Experiment's Suggestion (when optimizing), then a value recorded with sigopt.set_parameters, then the provided default. |
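Example
A minimal sketch; the parameter names and defaults are illustrative.
import sigopt

# returns the Suggestion's assignment when optimizing, a value recorded with
# sigopt.set_parameters if one exists, or the default otherwise
learning_rate = sigopt.get_parameter('learning_rate', default=0.01)
batch_size = sigopt.get_parameter('batch_size', default=32)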
sigopt.set_parameters(values)
Records the values of multiple parameters for your Run. Calling sigopt.get_parameter will look up the parameter value in the provided dictionary of values.
Arguments
Name | Type | Required? | Description |
---|---|---|---|
values | dictionary of {string: string or number} | Yes | A dictionary that maps parameter names to the value that should be used. |
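Example
A minimal sketch; the parameter names and values are illustrative.
import sigopt

# record several parameter values at once
sigopt.set_parameters({
    'learning_rate': 0.01,
    'activation': 'relu',
})

# later calls to sigopt.get_parameter look up the values recorded above
learning_rate = sigopt.get_parameter('learning_rate', default=0.1)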
sigopt.log_dataset(name)
Logs a dataset that will be used for your Run.
Arguments
Name | Type | Required? | Description |
---|---|---|---|
name | string | Yes | The name of the dataset you would like to log. |
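Example
A minimal sketch; the dataset name is an illustrative placeholder.
import sigopt

# record an identifier for the dataset this Run was trained on
sigopt.log_dataset('mnist_train_v2')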
sigopt.log_failure()
Indicates that the Run has failed for any reason. When performing optimization, the associated Observation is marked as a failure.
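Example
A minimal sketch; train_my_model is a hypothetical training function used only for illustration.
import sigopt

try:
    train_my_model()
except RuntimeError:
    # mark the Run (and, when optimizing, the associated Observation) as failed
    sigopt.log_failure()
    raise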
sigopt.log_image(image, name=None)
Uploads an image artifact for your Run. See the image argument description for a list of compatible inputs.
Arguments
Name | Type | Required? | Description |
---|---|---|---|
image | string, PIL.Image.Image, matplotlib.figure.Figure, or numpy.ndarray | Yes | The image artifact that you would like to log. |
name | string | No* | A name for the uploaded image. *Required only if the provided image is not a file path. |
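Example
A minimal sketch; the figure contents and file path are illustrative placeholders.
import sigopt
import matplotlib.pyplot as plt

# a figure object has no file path, so a name is required
fig, ax = plt.subplots()
ax.plot([0.9, 0.5, 0.3, 0.2])  # illustrative loss values
sigopt.log_image(fig, name='training-loss-curve')

# a file path can be logged without a name
sigopt.log_image('outputs/confusion_matrix.png')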
sigopt.log_metadata(key, value)
This stores any extra information about your Run. sigopt.log_metric is preferred for logging your model outputs and sigopt.get_parameter is preferred for fetching your model parameters.
Arguments
Name | Type | Required? | Description |
---|---|---|---|
key | string | Yes | The key for the metadata that you would like to log. |
value | number, object | Yes | The value of the metadata that you would like to log. If value is not a number then it will be logged as a string. |
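Example
A minimal sketch; the keys and values are illustrative.
import sigopt

# numbers are stored as numbers; any other value is logged as a string
sigopt.log_metadata('dataset_rows', 60000)
sigopt.log_metadata('git_commit', 'a1b2c3d')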
sigopt.log_metric(name, value, stddev=None)
Logs a metric value for your Run. A metric should be a scalar artifact of your model's training and evaluation. You may repeat this call with unique metric names to log values for many metrics. If you log the same metric multiple times then we will only keep the most recent value.
Arguments
Name | Type | Required? | Description |
---|---|---|---|
name | string | Yes | The name of the metric that you would like to log. |
value | number | Yes | The value of the metric to log. |
stddev | number | No | The standard deviation of the metric to log. |
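Example
A minimal sketch; the metric names and values are illustrative.
import sigopt

# log each metric under a unique name; repeated names keep only the most recent value
sigopt.log_metric('accuracy', 0.94)
sigopt.log_metric('f1_score', 0.91)

# optionally include a standard deviation, e.g. from cross-validation folds
sigopt.log_metric('cv_accuracy', 0.92, stddev=0.015)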
sigopt.log_model(type)
Logs information about your model.
Arguments
Name | Type | Required? | Description |
---|---|---|---|
type | string, keyword-only | Yes | The type of the model that is being logged, for example "RandomForestClassifier" or "xgboost". |
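Example
A minimal sketch; the model type string is illustrative.
import sigopt

# type is keyword-only
sigopt.log_model(type="RandomForestClassifier")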
sigopt.log_checkpoint(checkpoint_values)
Logs values for a single checkpoint. Log checkpoints across each epoch of your model's training to see a chart of the checkpoints on the runs page.
Arguments
Name | Type | Required? | Description |
---|---|---|---|
checkpoint_values | dictionary of {string: number} | Yes | The values of your checkpoints. The keys of each entry are the names of your checkpoints and the values are the numerical values that you would like to log. |
Example
# log a checkpoint for each epoch
for epoch in range(number_of_epochs):
    # compute training and validation loss, then log the checkpoint
    sigopt.log_checkpoint({
        'training_loss': training_loss,
        'validation_loss': validation_loss,
    })
Advanced Usage
sigopt.create_run(name=None, project=None, suggestion=None)
Manually begin recording a Run. This will always attempt to create a new Run via the SigOpt API. Runs should be cleaned up by using a Python context manager, or by calling run.end() on the returned run when finished. See the examples below.
Example
with sigopt.create_run(name='Run example', project='run-examples') as run:
    # log info about your run…
    run.log_dataset(name="my_training_data_v1")
    run.log_model(type="logisticRegression")
    model = make_my_model()
    training_loss = model.train(
        my_training_data,
        lr=sigopt.get_parameter('learning_rate', default=0.1)
    )
    run.log_metric('training loss', training_loss)
    accuracy = model.evaluate(my_validation_data)
    run.log_metric('accuracy', accuracy)
Arguments
Name | Type | Required? | Description |
---|---|---|---|
name | string | No | A helpful name to associate with your recorded Run. |
project | string | No | The project ID to associate your recorded Run with. Defaults to a project with the name of the current directory. If this project does not already exist, we will create it for you. Must be a lowercase string without spaces (i.e., it must match the regular expression /^[a-z0-9\-_\.]+$/). |
suggestion | string | No | The ID of the SigOpt Suggestion that you would like to use for this run. Use this when you want to implement your own SigOpt optimization loop. |
Output
Type | Description |
---|---|
sigopt.runs.LiveRunContext | Use this in a context manager to indicate the scope of the run, or call the .end() method when you are finished with it. This object has all of the log_* and get_parameter methods mentioned above. |
run.end()
This is a method on the object returned by sigopt.create_run. It stops recording the Run. Use this when it is impractical to use a context manager.
Example
run = sigopt.create_run(name='Run example', project='run-examples')
# log info about your run...
run.log_dataset(name="my_training_data_v1")
run.log_model(type="logisticRegression")
model = make_my_model()
training_loss = model.train(
    my_training_data,
    lr=run.get_parameter('learning_rate', 0.1)
)
run.log_metric('training loss', training_loss)
accuracy = model.evaluate(my_validation_data)
run.log_metric('accuracy', accuracy)
# end your run to indicate that it has completed
run.end()
Limitations
At this time, Training Monitor runs are not integrated with Project pages, so they will not appear alongside runs created by the CLI in data tables and charts. Training Monitor runs appear only on Experiment pages.