Use metadata to store information alongside SigOpt objects, such as training/evaluation time, IP addresses, or tags. Metadata is a collection of user-provided key-value pairs that SigOpt stores on your behalf under the
metadata field. Think of metadata as your annotation for a SigOpt object. You can include metadata when you create an object:
```python
from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")
experiment = conn.experiments().create(
    name="Sample experiment",
    parameters=[
        dict(
            name="x",
            bounds=dict(min=0, max=1),
            type="double",
        )
    ],
    metadata=dict(ip="127.0.0.1"),
)
print("Created experiment: https://app.sigopt.com/experiment/" + experiment.id)
```
You can also update an object's metadata later:
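A minimal sketch of such an update, assuming the client's `update()` endpoint; since an update may replace the stored metadata field wholesale, a safe default is to merge new keys into the existing values locally first. The network call is commented out so the snippet stands alone:

```python
# Merge new metadata into the existing values locally, then send the
# merged dict in a single update. `existing` stands in for the values
# previously stored on the object (e.g. experiment.metadata).
existing = dict(ip="127.0.0.1")
updated = {**existing, "hostname": "worker-3"}  # add or override keys

# Hedged: assumes the client's update() endpoint, e.g.
# conn.experiments(experiment.id).update(metadata=updated)
print(updated)
```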
The Analysis page in the Experiment Dashboard allows you to plot the metadata values stored on Observations as axes in the Experiment History visualizations. This makes it easy to visualize the relationship between metadata values and the metric value, or between metadata values and parameter values.
Metadata is a collection of up to 100 key/value pairs. Keys can be at most 100 characters long. Values must be non-null and can be numbers or strings; string values can be at most 500 characters long.
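These limits can be checked client-side before making an API call. A quick sketch; the helper name is hypothetical, not part of the sigopt client:

```python
def validate_metadata(metadata):
    """Check a metadata dict against the documented limits.

    Hypothetical helper: at most 100 key/value pairs, keys at most
    100 characters, values non-null numbers or strings, and string
    values at most 500 characters.
    """
    if len(metadata) > 100:
        raise ValueError("metadata may contain at most 100 key/value pairs")
    for key, value in metadata.items():
        if len(key) > 100:
            raise ValueError(f"key {key!r} exceeds 100 characters")
        if value is None:
            raise ValueError(f"value for {key!r} must be non-null")
        if isinstance(value, str):
            if len(value) > 500:
                raise ValueError(f"string value for {key!r} exceeds 500 characters")
        elif not isinstance(value, (int, float)):
            raise ValueError(f"value for {key!r} must be a number or string")

validate_metadata(dict(ip="127.0.0.1", attempts=3))  # passes silently
```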
```python
from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")
observation = conn.experiments(experiment.id).observations().create(
    assignments=dict(x=0.5),
    metadata=dict(ip="127.0.0.1"),
    value=1,
)
print("Created observation for experiment: https://app.sigopt.com/experiment/" + experiment.id)
```
Example Use Cases
Metadata is extremely flexible, and there are many different ways to use it.
- Track information such as error codes and hostnames, which can be helpful in diagnosing issues within distributed systems.
- Store IP information in suggestion metadata while completing parallel evaluations of a model. Examining the open suggestions then tells you at a glance which machines are currently tuning models, and gives you a rough estimate of how long each set of assignments has been running.
- Tag experiments that are all tuning the same type of model.
- Annotate observations with timing data. Use the Analysis page to see how long each model took to evaluate versus the value of the metric.
- Store a secondary metric on an observation.