# Documentation

Welcome to the developer documentation for SigOpt. If you have a question you can't find the answer to, feel free to contact us!

# Multimetric Optimization

In many applications, it may be necessary to consider multiple competing metrics which have optimal values for different parameters. SigOpt enables this through Multimetric Experiments.

## How does Multimetric Optimization work?

The contour plots below depict two competing metrics, where no single pair of parameter values `x1` and `x2` can simultaneously achieve the optima of both metrics `f1` and `f2`. For example, it can be extremely challenging to simultaneously maximize model performance and minimize training time. SigOpt's Multimetric Optimization addresses this problem by searching for the set of efficient trade-offs between the metrics. The result of a Multimetric Experiment is a Pareto Frontier.
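As a toy illustration (the quadratic functions below are invented for this sketch, not taken from the plots), two metrics can place their optima at different parameter values, so no single `(x1, x2)` maximizes both:

```python
# Two competing toy metrics over parameters (x1, x2).
# f1 peaks at (1, 1) while f2 peaks at (-1, -1), so improving one
# metric past a certain point necessarily degrades the other.
def f1(x1, x2):
    return -((x1 - 1) ** 2 + (x2 - 1) ** 2)

def f2(x1, x2):
    return -((x1 + 1) ** 2 + (x2 + 1) ** 2)
```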

## The Pareto Frontier

Below is an example of possible outcomes of a Multimetric Experiment involving the metrics from the contour plots above.

The graph on the left shows the resulting feasible region and Pareto Frontier. In this figure, the blue circles are the Pareto-efficient points, the red circles are points that do not fall on the frontier, and the black dots outline the feasible region. The blue circles form the optimal set of results: points where one metric cannot be improved without the other suffering. Each blue point in the left graph corresponds to an efficient parameter choice, shown as a blue point in the right graph.
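The frontier itself is just the set of non-dominated observations. A minimal sketch (illustrative only, not SigOpt code) of filtering observed metric pairs down to the Pareto-efficient ones, assuming both metrics are maximized:

```python
def pareto_front(points):
    """Return the non-dominated (metric1, metric2) pairs from a list of
    observations, assuming both metrics are maximized."""
    def dominated(p):
        # p is dominated if some q is at least as good on both metrics
        # and strictly better on at least one.
        return any(
            q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])
            for q in points
        )
    return [p for p in points if not dominated(p)]
```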

## Interpreting the Solution

A domain expert can judge which of the results in the left graph is preferred, and then use the corresponding parameters from the right graph (also accessible through `get_best_runs()`) to build the desired model in production.
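As a sketch of that retrieval (the `values` and `params` attribute names follow the SigOpt Python client's run objects, but verify them against the client version you use):

```python
# Assuming `experiment` is the handle returned by sigopt.create_experiment
# (as in the code example below). For a Multimetric Experiment,
# get_best_runs() yields the Pareto-efficient runs rather than one best run.
for run in experiment.get_best_runs():
    print(run.values)  # the metric values on the frontier
    print(run.params)  # the parameter assignments that produced them
```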

## Defining a Multimetric Function in SigOpt

A SigOpt Multimetric Experiment can be conducted to explore the optimal values achievable for both metrics. The code block below defines these metrics with the Python client, along with the associated experiment metadata used to define the SigOpt Experiment. Notice that, unlike a single-metric function, a multimetric function returns a set of values, each containing the name of a metric and the value achieved for that metric.
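For instance, an evaluation function for the experiment below might look like the following sketch. The function body is a placeholder (invented stand-in arithmetic, not a real model); the point is the return shape, with one named value per metric:

```python
def evaluate_model(hidden_layer_size, activation_fn):
    # Placeholder for real training and timing: here, larger layers
    # "score" slightly higher but take longer at inference.
    holdout_accuracy = 0.80 + 0.001 * hidden_layer_size
    inference_time = 0.5 * hidden_layer_size
    # A multimetric function reports a value for each named metric.
    return [
        dict(name="holdout_accuracy", value=holdout_accuracy),
        dict(name="inference_time", value=inference_time),
    ]
```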

## Multimetric Code Example

```python
import sigopt

sigopt.create_experiment(
    name="Multimetric optimization",
    project="sigopt-examples",
    type="offline",
    parameters=[
        dict(
            name="hidden_layer_size",
            type="int",
            bounds=dict(min=32, max=128),
        ),
        dict(
            name="activation_fn",
            type="categorical",
            categorical_values=["relu", "tanh"],
        ),
    ],
    metrics=[
        dict(
            name="holdout_accuracy",
            strategy="optimize",
            objective="maximize",
        ),
        dict(
            name="inference_time",
            strategy="optimize",
            objective="minimize",
        ),
    ],
    parallel_bandwidth=1,
    budget=30,
)
```

SigOpt will find trade-offs in each of your metrics in order to find the efficient frontier. If you want to quantify how much of a trade-off you’re willing to make, check out the Metric Thresholds feature.
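As a sketch of that feature (assuming a `threshold` field on the metric definition; check the Metric Thresholds documentation for the exact form), a threshold bounds how far SigOpt may trade a metric away:

```python
# Only explore trade-offs where holdout_accuracy stays at or above 0.90,
# while inference_time is still minimized along the frontier.
metrics = [
    dict(
        name="holdout_accuracy",
        strategy="optimize",
        objective="maximize",
        threshold=0.90,
    ),
    dict(
        name="inference_time",
        strategy="optimize",
        objective="minimize",
    ),
]
```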

## Limitations

- `budget` must be set when a Multimetric Experiment is created
- The maximum number of optimized metrics is 2
- Multisolution Experiments are not permitted