Welcome to the developer documentation for SigOpt. If you have a question you can’t answer, feel free to contact us!
This feature is currently in alpha. Please contact us if you would like more information.

Dockerfile: Define your Environment

To run your model, SigOpt Orchestrate needs to know how to set up your model's environment. SigOpt Orchestrate will create a Docker container with your specified environment requirements. You can read more about the Dockerfile in Docker's official docs.

You can use a Dockerfile you've already created, or let SigOpt Orchestrate auto-generate a Dockerfile template for you by running:

orchestrate init

Example Dockerfile:

FROM orchestrate/python-3.9:0.9.3

COPY requirements.txt /orchestrate/requirements.txt
RUN pip install --no-cache-dir --user -r /orchestrate/requirements.txt

ENV SIGOPT_PROJECT="sigopt-examples"

COPY . /orchestrate
WORKDIR /orchestrate
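The template above installs your model's dependencies from a requirements.txt at the root of your project. As a purely illustrative sketch (these packages and version pins are an assumption, not something SigOpt prescribes), such a file might look like:

```
# requirements.txt — illustrative only; list the packages your model imports,
# pinned to the versions you have tested against
sigopt
numpy>=1.21
scikit-learn>=1.0
```

Pinning versions here keeps the container build reproducible: the same Dockerfile will produce the same environment on every run.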

Enabling GPU access

To enable GPU access for your workflows, you will have to specify your CUDA and modeling framework installation in your Dockerfile. NVIDIA publishes reference Dockerfiles for each CUDA version that you can modify as needed.

Here's an example Dockerfile for enabling GPU access when running SigOpt Orchestrate — it uses CUDA 11.1.1 with TensorFlow 2.4.1:

FROM nvidia/cuda:11.1.1-cudnn8-runtime

USER root

RUN set -ex \
    && apt-get update -yqq \
    && apt-get install -yqq git python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

RUN pip3 install --no-cache-dir --upgrade pip
RUN pip3 install --no-cache-dir tensorflow-gpu==2.4.1 numpy
RUN pip3 install --no-cache-dir sigopt

ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64

COPY venv_requirements.txt /orchestrate/venv_requirements.txt
RUN pip3 install --no-cache-dir -r /orchestrate/venv_requirements.txt

RUN useradd orchestrate

USER orchestrate
COPY . /orchestrate
WORKDIR /orchestrate
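Before launching a full run, it can help to sanity-check that the CUDA runtime library is discoverable inside the container. Here is a minimal diagnostic sketch using only the Python standard library — the helper name is ours, not part of SigOpt Orchestrate, and inside the container you could equivalently call TensorFlow's tf.config.list_physical_devices('GPU'):

```python
import ctypes.util


def cuda_runtime_visible():
    """Return True if the dynamic linker can locate libcudart.

    Note: on Linux, ctypes.util.find_library consults the ldconfig
    cache and compiler search paths, so it may miss libraries that
    are visible only via LD_LIBRARY_PATH. Treat a False result as a
    prompt to investigate, not a definitive answer.
    """
    return ctypes.util.find_library("cudart") is not None


if __name__ == "__main__":
    print("CUDA runtime visible:", cuda_runtime_visible())
```

Running this on a machine without CUDA installed simply prints False; it never raises, which makes it safe to drop into a container's startup checks.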