Documentation

Welcome to SigOpt’s developer documentation. If you have a question you can’t answer, feel free to contact us!

Prior Beliefs

Users often have prior beliefs on how metric values might behave for certain parameters; this knowledge could be derived from domain expertise, similar models trained on different datasets, or certain known structure of the problem itself. Now, SigOpt can take advantage of these prior beliefs on parameters to make the optimization process more efficient.

Defining the Prior Belief

A prior belief is defined through the prior field of each parameter. Specifically, users specify an appropriate distribution for that parameter. By default, SigOpt assumes every parameter is uniformly distributed over its bounds. An example definition for a normally distributed parameter is shown below.

When a prior belief is set for a parameter, SigOpt is more likely to generate suggestions from regions of that parameter's domain where the probability density function (pdf) is high. Roughly speaking, parameter assignments with a pdf value of 2 are twice as likely to be suggested as those with a pdf value of 1. The effect of the prior belief is most pronounced during the initial portion of an experiment.
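To make the "twice as likely" intuition concrete, here is a minimal sketch in plain Python (not part of the SigOpt API) that evaluates the density of a normal(0, 1) prior at two points whose pdf values differ by exactly a factor of two:

```python
import math

def normal_pdf(x, mean=0.0, scale=1.0):
    """Probability density of a normal distribution at x."""
    z = (x - mean) / scale
    return math.exp(-0.5 * z * z) / (scale * math.sqrt(2.0 * math.pi))

# For a normal(0, 1) prior, the density at x = 0 is exactly twice the
# density at x = sqrt(2 * ln 2) ~= 1.177. All else being equal, values
# near 0 would therefore be suggested about twice as often as values
# near 1.18 early in the experiment.
ratio = normal_pdf(0.0) / normal_pdf(math.sqrt(2.0 * math.log(2.0)))
print(round(ratio, 6))  # -> 2.0
```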

Normal
{
  "bounds": {
    "max": 3,
    "min": -3
  },
  "name": "x",
  "prior": {
    "mean": 0,
    "name": "normal",
    "scale": 1
  },
  "type": "double"
}
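The JSON above places a normal(0, 1) prior on a parameter bounded to [-3, 3]. As a sanity check, the fraction of that distribution's mass falling inside the bounds can be computed with the error function; how SigOpt handles the small amount of mass outside the bounds is not covered here, so this is only an illustration of why these bounds pair well with this prior:

```python
import math

def normal_cdf(x, mean=0.0, scale=1.0):
    """Cumulative distribution function of a normal, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (scale * math.sqrt(2.0))))

# Mass of the normal(0, 1) prior inside the parameter bounds [-3, 3]:
# nearly all of the prior's probability lies within the feasible region.
mass_inside = normal_cdf(3.0) - normal_cdf(-3.0)
print(round(mass_inside, 4))  # -> 0.9973
```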

Creating an Experiment with Prior Beliefs

Below, we create a new experiment with a Beta prior belief on the log learning rate and a Normal prior belief on the column subsampling ratio (colsample_bytree).

from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")
experiment = conn.experiments().create(
  name="xgboost with prior beliefs",
  parameters=[
    dict(
      name="log10_learning_rate",
      bounds=dict(
        min=-4,
        max=0
        ),
      prior=dict(
        name="beta",
        shape_a=2,
        shape_b=4.5
        ),
      type="double"
      ),
    dict(
      name="max_depth",
      bounds=dict(
        min=3,
        max=12
        ),
      type="int"
      ),
    dict(
      name="colsample_bytree",
      bounds=dict(
        min=0,
        max=1
        ),
      prior=dict(
        name="normal",
        mean=0.6,
        scale=0.15
        ),
      type="double"
      )
    ],
  metrics=[
    dict(
      name="AUPRC",
      objective="maximize",
      strategy="optimize"
      )
    ],
  observation_budget=65,
  parallel_bandwidth=2,
  project="sigopt-examples",
  type="offline"
  )
print("Created experiment: https://app.sigopt.com/experiment/" + experiment.id)
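To see roughly where the Beta(shape_a=2, shape_b=4.5) prior above concentrates suggestions, the sketch below assumes, purely for illustration, that the unit-interval beta distribution is rescaled linearly onto the parameter's bounds [-4, 0] (an assumption, not documented SigOpt behavior). Under that assumption, the prior's mode sits near a log10 learning rate of about -3.1, i.e. a learning rate around 10^-3.1:

```python
# Illustrative only: assumes a linear rescaling of the unit-interval
# Beta(2, 4.5) prior onto the log10_learning_rate bounds [-4, 0].
shape_a, shape_b = 2.0, 4.5
lo, hi = -4.0, 0.0

# Mode of Beta(a, b) on [0, 1] (valid when a > 1 and b > 1).
unit_mode = (shape_a - 1.0) / (shape_a + shape_b - 2.0)

# Map the mode onto the parameter's bounds.
log10_lr_mode = lo + (hi - lo) * unit_mode
print(round(log10_lr_mode, 2))  # -> -3.11
```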

Updating Prior Beliefs

While an experiment is in progress, you can change your belief about how a particular parameter is distributed. Prior beliefs can be updated directly through our API. An example is given below, adjusting the prior belief on the learning rate; note that the update lists every parameter, including the ones whose priors are left unchanged.

experiment = conn.experiments(experiment.id).update(
  parameters=[
    dict(
      name="log10_learning_rate",
      prior=dict(
        name="beta",
        shape_a=8,
        shape_b=2
        )
      ),
    dict(
      name="max_depth"
      ),
    dict(
      name="colsample_bytree"
      )
    ]
  )
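Assuming, purely for illustration, that SigOpt rescales the unit-interval beta linearly onto the parameter bounds (an assumption, not documented behavior), the updated Beta(8, 2) prior concentrates mass near a log10 learning rate of about -0.5 (a learning rate near 0.32), a marked shift from the original Beta(2, 4.5) prior:

```python
def scaled_beta_mode(shape_a, shape_b, lo, hi):
    """Mode of a Beta(a, b) prior linearly rescaled onto [lo, hi].

    Valid for shape_a > 1 and shape_b > 1; the linear rescaling is an
    assumption made for illustration, not documented SigOpt behavior.
    """
    unit_mode = (shape_a - 1.0) / (shape_a + shape_b - 2.0)
    return lo + (hi - lo) * unit_mode

old_mode = scaled_beta_mode(2.0, 4.5, -4.0, 0.0)  # original prior
new_mode = scaled_beta_mode(8.0, 2.0, -4.0, 0.0)  # updated prior
print(round(old_mode, 2), round(new_mode, 2))  # -> -3.11 -0.5
```

The update thus expresses a new belief that larger learning rates are more promising, and SigOpt's subsequent suggestions would favor that region accordingly.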

Removing Prior Beliefs

Prior beliefs can also be removed while an experiment is in progress; the parameter then reverts to the default belief that it is uniformly distributed. To remove a prior belief, simply set the prior field to None. The example below removes the prior belief on the colsample_bytree parameter.

experiment = conn.experiments(experiment.id).update(
  parameters=[
    dict(
      name="log10_learning_rate"
      ),
    dict(
      name="max_depth"
      ),
    dict(
      name="colsample_bytree",
      prior=None
      )
    ]
  )

Limitations

Have questions? Contact us to let us know.