Observations report the value of your metric evaluated at a given set of Assignments. Sometimes, though, a set of Assignments is not feasible to evaluate. We call this a metric failure, since the evaluation failed to produce a metric. For example, a machine runs out of memory mid-training because the neural network architecture was too large, a chemical mixture fails to stabilize and is not measurable, or the Assignments are simply not in the domain of the function you're trying to optimize.
If an infeasible region of the parameter space is known beforehand, it may be possible to exclude it up front with Constraints. In situations where feasibility is probabilistic or hard to define in advance, SigOpt includes the capability to learn to avoid metric failures.
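For the known-beforehand case, a Constraint can be declared when the Experiment is created. Below is a minimal sketch of the Experiment metadata using SigOpt's linear-constraints format; the parameter names `filter_size` and `num_layers`, the bounds, and the threshold of 16 are all hypothetical, and note that linear Constraints apply to double-type parameters:

```python
# Hypothetical Experiment metadata excluding a known-infeasible region:
# configurations where filter_size + num_layers > 16 are assumed to
# produce architectures too large to train, so we constrain them away.
experiment_meta = {
    "name": "CNN tuning with a feasibility constraint",
    "parameters": [
        {"name": "filter_size", "type": "double", "bounds": {"min": 2, "max": 9}},
        {"name": "num_layers", "type": "double", "bounds": {"min": 1, "max": 12}},
    ],
    # Require 1*filter_size + 1*num_layers <= 16
    "linear_constraints": [
        {
            "type": "less_than",
            "threshold": 16,
            "terms": [
                {"name": "filter_size", "weight": 1},
                {"name": "num_layers", "weight": 1},
            ],
        }
    ],
    "observation_budget": 60,
}
```

When no such inequality can be written down ahead of time, reporting failed Observations, as described next, is the alternative.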
If SigOpt makes a Suggestion that is not feasible, you can report a failed Observation, which tells us that this Suggestion led to a metric failure. As you report more of these failed Observations, our internal optimization algorithms will learn the feasible region and recommend only points within it, optimizing your Experiment over this restricted, non-rectangular domain.
Note that a failed Observation should be reported only if obtaining an evaluation metric was not possible because of the Assignments themselves. If a certain parameter configuration for a convolutional neural network led to a Python out-of-memory error because the filter size and number of layers interacted in a certain way to make the network architecture too large, it is appropriate to report a failed Observation. If model training abruptly stops because a machine randomly fails, it would not be appropriate to report a failed Observation. In that case, we recommend deleting the Suggestion.
Reporting Failed Observations
Reporting failed Observations is as simple as setting a flag in the Observation Create call.
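A sketch of one optimization step with the SigOpt Python client is shown below. Here `train_network` is a placeholder for your own evaluation code, and catching `MemoryError` stands in for whatever signals that the Assignments themselves are infeasible; a real loop would first build `conn` with `sigopt.Connection(client_token=...)`:

```python
# One optimization step: evaluate a Suggestion, reporting a failed
# Observation when the Assignments themselves cause the failure.
# `conn` is a sigopt.Connection; `train_network` is a placeholder
# evaluation function that raises (e.g. MemoryError) on infeasible input.

def run_step(conn, experiment_id, train_network):
    suggestion = conn.experiments(experiment_id).suggestions().create()
    try:
        value = train_network(suggestion.assignments)
    except MemoryError:
        # The Assignments made evaluation impossible: report a failed
        # Observation so SigOpt learns to avoid this region.
        conn.experiments(experiment_id).observations().create(
            suggestion=suggestion.id,
            failed=True,
        )
    else:
        conn.experiments(experiment_id).observations().create(
            suggestion=suggestion.id,
            value=value,
        )
```

Note that `failed=True` replaces the `value` field entirely; a failed Observation carries no metric value.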
Note: The complexity of the failure region and the tightness of your Parameter Bounds affect how quickly SigOpt learns to avoid failures. We recommend slightly increasing the observation budget for Experiments with a non-trivial number of failed Observations.