Optimize Your Model
A key component of the SigOpt Platform is the ability to go from tracking your model with SigOpt Runs to optimizing that very same model with minimal changes to your code.
At a high level, a SigOpt Experiment is a grouping of SigOpt Runs and is defined by user-defined parameter and metric spaces. A SigOpt Experiment has a budget that determines the number of hyperparameter tuning loops to conduct. Each hyperparameter loop produces a SigOpt Run with suggested assignments for each parameter. Different sets of hyperparameter values are suggested by SigOpt's algorithms, the user, or both, with the goal of finding the optimal set(s) of hyperparameters. Over time, when using the SigOpt Optimizer, you can expect your model's performance on your metrics to improve.
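The loop described above can be sketched in plain Python. This is a minimal illustration of the budget-driven suggest/evaluate cycle, not SigOpt's actual optimizer: the `suggest` function here just samples randomly, and the parameter space and `evaluate` function are assumptions for the example.

```python
# Plain-Python sketch of a budget-driven tuning loop.
# `suggest` is a random-sampling stand-in for SigOpt's algorithms;
# the space and evaluate function below are illustrative only.
import random

def suggest(space):
    # Propose one assignment per parameter.
    return {name: random.choice(values) for name, values in space.items()}

def run_experiment(space, budget, evaluate):
    best = None
    for _ in range(budget):  # the budget caps how many runs are executed
        assignments = suggest(space)
        value = evaluate(assignments)
        if best is None or value > best[1]:  # maximize the metric
            best = (assignments, value)
    return best

space = {
    "hidden_layer_size": list(range(32, 129)),
    "activation_fn": ["relu", "tanh"],
}
# Toy metric: pretend accuracy grows with layer size.
best_assignments, best_value = run_experiment(
    space, budget=30, evaluate=lambda a: a["hidden_layer_size"] / 128
)
```

Each iteration corresponds to one SigOpt Run: a fresh set of suggested assignments is produced, evaluated, and compared against the best result so far.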
Create a SigOpt Experiment
A SigOpt Experiment is created as an object containing all information needed to execute the optimization process. To create a SigOpt Experiment, you will define:
|Experiment Name||Name of the experiment.|
|Experiment Type||Identifier for the type of experiment.|
|Parameter Space||List of objects where each object defines a single parameter.|
|Metric Space||List of objects where each object defines a single metric.|
|Budget||Integer defining the minimum number of SigOpt Runs in a given SigOpt Experiment.|
|Parallel Bandwidth||Integer defining the number of machines/nodes running different asynchronous SigOpt Runs at any given time.|
All of the above information is available through the Experiment object returned during experiment creation, as well as on the experiment's unique page on the web dashboard.
sigopt.create_experiment(
    name="Keras Model Optimization (Python)",
    type="offline",
    parameters=[
        dict(
            name="hidden_layer_size",
            type="int",
            bounds=dict(min=32, max=128),
        ),
        dict(
            name="activation_fn",
            type="categorical",
            categorical_values=["relu", "tanh"],
        ),
    ],
    metrics=[
        dict(name="holdout_accuracy", objective="maximize"),
    ],
    parallel_bandwidth=1,
    budget=30,
)
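Once created, the Experiment object drives the tuning loop: each iteration yields a run carrying suggested assignments, and your training code reports the metric back. The sketch below shows that contract using stub classes in place of the SigOpt client; `StubExperiment`, `StubRun`, and the stand-in accuracy calculation are assumptions for illustration, not the real API objects.

```python
# Sketch of consuming an experiment's runs: each run carries suggested
# assignments, and training code logs the metric back.
# StubExperiment/StubRun are illustrative stand-ins, not the SigOpt client.
import random

class StubRun:
    def __init__(self):
        # Suggested assignments for the parameters defined above.
        self.params = {
            "hidden_layer_size": random.randint(32, 128),
            "activation_fn": random.choice(["relu", "tanh"]),
        }
        self.metrics = {}

    def log_metric(self, name, value):
        self.metrics[name] = value

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

class StubExperiment:
    def __init__(self, budget):
        self.budget = budget

    def loop(self):
        # One run per unit of budget.
        for _ in range(self.budget):
            yield StubRun()

experiment = StubExperiment(budget=30)
results = []
for run in experiment.loop():  # one iteration per budgeted SigOpt Run
    with run:
        # Stand-in for training: score depends on the suggested assignments.
        accuracy = run.params["hidden_layer_size"] / 128
        run.log_metric("holdout_accuracy", accuracy)
        results.append(run.metrics["holdout_accuracy"])
```

The loop runs exactly as many times as the budget allows, and each run records the metric named in the experiment's metric space.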