Archived Documentation

Welcome to the developer documentation for SigOpt. If you have a question you can’t answer, feel free to contact us!
You are currently viewing archived SigOpt documentation. The newest documentation can be found here.

Metric Failures

Observations report the value of your metric evaluated on a given set of Assignments. Sometimes, though, a set of Assignments is not feasible to evaluate. We call this case a metric failure, since the model failed to produce a metric value. For example, a machine runs out of memory mid-training because the neural network architecture was too large, a chemical mixture fails to stabilize and cannot be measured, or the Assignments are simply not in the domain of the function you're trying to optimize.

If an infeasible region of the parameter space is known beforehand, it may be possible to exclude it up front with Constraints. When feasibility is instead defined by thresholds on auxiliary, non-optimized metric values, Metric Constraints may be a better fit.
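As a minimal sketch of the first case, a linear Constraint can rule out a known infeasible region at experiment creation time. The parameter names and the threshold below are hypothetical, chosen only to illustrate the shape of such a constraint:

```python
# Hypothetical: suppose the region where filter_size + num_layers > 20 is
# known beforehand to be infeasible. A linear "less_than" constraint on the
# weighted sum of the two parameters excludes that region up front.
linear_constraint = dict(
    type="less_than",
    threshold=20,
    terms=[
        dict(name="filter_size", weight=1),
        dict(name="num_layers", weight=1),
    ],
)

# This dict would be supplied as one entry of the constraints list when
# creating the Experiment, so no Suggestion falls in the excluded region.
```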

If SigOpt makes a Suggestion that is not feasible, you can report a failed Observation, which tells us that this Suggestion led to a metric failure. As you report more of these failed Observations, our internal optimization algorithms will figure out the feasible region and only recommend points there, optimizing your Experiment within this restricted, non-rectangular domain.

Note that a failed Observation should be reported only if obtaining an evaluation metric was not possible because of the Assignments themselves. If a certain parameter configuration for a convolutional neural network led to a Python out-of-memory error because the filter size and number of layers interacted in a certain way to make the network architecture too large, it is appropriate to report a failed Observation. If model training abruptly stops because a machine randomly fails, it would not be appropriate to report a failed Observation. In that case, we recommend deleting the Suggestion.
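A sketch of that distinction in code (the wrapper and evaluation function here are hypothetical, not part of the SigOpt client): an out-of-memory error triggered by the Assignments themselves maps to a failed Observation, while unrelated errors propagate so the caller can delete the Suggestion and retry instead.

```python
def evaluate_or_fail(evaluate, assignments):
    """Run a hypothetical evaluation function on a set of Assignments.

    Returns (value, failed). failed is True only when the Assignments
    themselves made evaluation impossible (e.g. the resulting network
    architecture was too large to fit in memory). Any other error
    propagates, signaling an infrastructure problem: delete the
    Suggestion rather than reporting a failed Observation.
    """
    try:
        return evaluate(assignments), False
    except MemoryError:
        # Failure caused by the parameter configuration itself
        return None, True


# Usage with a toy evaluation function that "runs out of memory"
# whenever the (hypothetical) architecture gets too large:
def toy_evaluate(assignments):
    if assignments["num_layers"] * assignments["filter_size"] > 100:
        raise MemoryError("architecture too large")
    return 0.9

print(evaluate_or_fail(toy_evaluate, {"num_layers": 2, "filter_size": 3}))
```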

Reporting Failed Observations

Reporting failed Observations is as simple as setting a flag in the Observation Create call.

from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")
observation = conn.experiments(EXPERIMENT_ID).observations().create(
  suggestion=SUGGESTION_ID,
  failed=True,
)

Note: The complexity of failures and the tightness of your Parameter Bounds impact the speed at which SigOpt will learn to avoid failures. We recommend slightly increasing the observation budget for experiments with a non-trivial number of failed Observations.
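As a rough illustration of that sizing advice (the failure rate below is a made-up assumption from hypothetical pilot runs, not a SigOpt recommendation), padding the budget by the fraction of evaluations you expect to fail keeps the number of successful Observations roughly on target:

```python
base_budget = 60              # budget you would choose if nothing failed
expected_failure_rate = 0.25  # assumed fraction of failed Observations

# Pad the budget so roughly base_budget successful Observations remain
# after failures are discarded.
padded_budget = round(base_budget / (1 - expected_failure_rate))
print(padded_budget)  # 80
```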