Metric Constraints

The Metric Constraints feature combines the ideas of Metric Strategy and Metric Thresholds to give you more control over where SigOpt searches for desirable outcomes. In some cases, you may care about the performance of additional guardrail metrics whose values should satisfy some baseline. A Multimetric Experiment can often be rephrased as a constrained experiment: you optimize a primary metric subject to the secondary or tertiary metrics satisfying certain thresholds.

Example: Optimizing Accuracy with a Constraint on Inference Time

Recall the neural network example from the Metric Thresholds documentation. Instead of exploring the tradeoff between accuracy and inference time, suppose we only want to optimize the accuracy of the network. We are still concerned with inference time, however, so we define it as a constraint metric with a threshold of 100ms. SigOpt lets you state such limitations at experiment creation to inform the optimizer of the practical restrictions in your problem.

Defining Metric Constraints

To designate one of your metrics as a constraint metric, set the strategy field to constraint in the desired Metric object when creating your experiment. Every constraint metric must specify a threshold. When the objective is minimize, runs with constraint metric values less than or equal to the threshold are considered feasible; when the objective is maximize, runs with constraint metric values greater than or equal to the threshold are considered feasible.
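For example, a constraint in the maximize direction might require that a guardrail accuracy metric stay at or above 0.9. A minimal sketch of such a Metric object (the metric name here is illustrative, not part of the experiment below):

dict(
  name="validation_accuracy",  # hypothetical guardrail metric
  strategy="constraint",
  objective="maximize",
  threshold=0.9  # runs with validation_accuracy >= 0.9 are feasible
)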

Below, we create a new experiment for the example of optimizing the accuracy of the network subject to an inference time constraint.

import sigopt

experiment = sigopt.create_experiment(
  name="Single metric optimization with constraint metrics",
  project="sigopt-examples",
  type="offline",
  parameters=[
    dict(
      name="hidden_layer_size",
      type="int",
      bounds=dict(
        min=32,
        max=512
      )
    ),
    dict(
      name="activation_fn",
      type="categorical",
      categorical_values=[
        "relu",
        "tanh"
      ]
    )
  ],
  metrics=[
    dict(
      name="holdout_accuracy",
      strategy="optimize",
      objective="maximize"
    ),
    dict(
      name="inference_time",
      strategy="constraint",
      objective="minimize",
      threshold=0.1  # 100ms, with inference_time reported in seconds
    )
  ],
  parallel_bandwidth=1,
  budget=30
)
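Once the experiment is created, each run must report both metrics (see Limitations below). Here is a minimal sketch of the optimization loop; train_and_evaluate is a hypothetical function that trains the network and returns the two metric values:

# Sketch of an optimization loop; train_and_evaluate is hypothetical.
for run in experiment.loop():
  with run:
    accuracy, inference_time = train_and_evaluate(
      hidden_layer_size=run.params["hidden_layer_size"],
      activation_fn=run.params["activation_fn"],
    )
    # Every constraint metric must be reported for a successful run.
    run.log_metric("holdout_accuracy", accuracy)
    run.log_metric("inference_time", inference_time)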

Updating Constraints

You can update the threshold of a constraint metric on the experiment's Properties page at any time while the experiment is in progress.

Feasibility

If the threshold defined for a constraint metric cannot be satisfied anywhere in the domain, this feature will behave unpredictably. For example, if you set the threshold for inference time to 0 (which is not achievable), SigOpt will assume 0ms is possible and explore the domain erratically in search of runs that satisfy the threshold. When defining constraints, it is best to choose threshold values at the start of an experiment that are well understood from previous experiments or prior knowledge.
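To make the pitfall concrete, this constraint definition is infeasible everywhere and should be avoided:

dict(
  name="inference_time",
  strategy="constraint",
  objective="minimize",
  threshold=0  # infeasible: no network has zero inference time
)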

Best Runs

The Best Runs list only considers runs that satisfy all constraint metric thresholds. In the early stages of an experiment, it is possible that no run satisfies every threshold, and thus there are no best runs. In this situation, the Best Runs list returns no best runs (an empty Pagination), and we recommend waiting until the experiment is further along before retrieving the best runs.
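A sketch of retrieving the best runs with the Python client and handling the empty case, where experiment is the object returned by sigopt.create_experiment above:

# get_best_runs() yields only runs that satisfy all constraint thresholds.
best_runs = list(experiment.get_best_runs())
if not best_runs:
  print("No feasible runs yet; let the experiment progress further.")
else:
  for run in best_runs:
    print(run.id, run.values)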

Recommendations

  • Set the experiment budget higher. SigOpt Experiments with Metric Constraints require additional data points to map out the associated feasible parameter space. As a result, we recommend adding 25% to your original budget for each constraint metric; for example, with one constraint metric, the experiment above would raise a baseline budget of 30 to roughly 38.
  • Parallelism is another powerful tool for accelerating tuning by testing multiple runs simultaneously. It can be used in conjunction with Metric Constraints, but each constraint metric requires extra time for SigOpt to compute the associated feasible parameter space, so we recommend limiting parallelism to no more than 5 simultaneous SigOpt Runs.

Limitations

  • An experiment can have at most 4 constraint metrics.
  • Every constraint metric must define a threshold value.
  • All specified constraint metrics must be reported for successful SigOpt Runs.
  • No constraint metrics can be reported for failed SigOpt Runs.
  • budget must be set for an experiment with Metric Constraints.
  • Multisolution experiments are not permitted.