
How to build a custom model

This tutorial shows how to implement a custom neural network architecture or make changes to an existing one.

Prerequisites

Before you start, please make sure you already have a Supervisely account and at least one agent deployed on a machine with GPU support.

You will also need nvidia-docker. Please check the Cluster section to learn more.
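
As a quick sanity check (our suggestion, not an official setup step), you can verify that containers see your GPU. The nvidia/cuda:9.0-base image here matches the CUDA 9.0 toolchain used later in this tutorial; adjust it to your setup:

nvidia-docker run --rm nvidia/cuda:9.0-base nvidia-smi

If everything is configured correctly, this prints the usual nvidia-smi GPU table.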

GitHub repo

We make our source code for neural networks (and more!) publicly available on GitHub.

Run this command to clone the sources to your computer:

git clone https://github.com/supervisely/supervisely

Docker images

To deal with software requirements and deployment, we pack the source code of each neural network into a Docker image. Before you start, you need to build one.

Switch to the supervisely/nn folder. Here you can see every available network. In this tutorial we will use the unet_v2 network.
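
For example, assuming you cloned the repo into the current directory:

cd supervisely/nn
ls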

Let's take a look at what's inside:

.
├── Dockerfile
└── src
    ├── common.py
    ├── dataset.py
    ├── debug_saver.py
    ├── fast_inference.py
    ├── inference.py
    ├── __init__.py
    ├── legacy_inference.py
    ├── metrics.py
    ├── plot_from_logs.py
    ├── schemas.json
    ├── servicer.py
    ├── train.py
    └── unet.py

The src folder contains the source code. Put your Python code and other files here.

The Dockerfile contains the commands necessary to install dependencies and the instructions to bundle the source code.

Here are the contents of the Dockerfile:

FROM supervisely/nn-base

# pytorch
RUN conda install -y -c soumith \
        magma-cuda90=2.3.0 \
    && conda install -y -c pytorch \
        pytorch=0.3.1 \
        torchvision=0.2.0 \
        cuda90=1.0 \
    && conda clean --all --yes

# sources
ENV PYTHONPATH /workdir:$PYTHONPATH
WORKDIR /workdir/src

ARG SOURCE_PATH
ARG RUN_SCRIPT
ENV RUN_SCRIPT=$RUN_SCRIPT

COPY supervisely_lib /workdir/supervisely_lib
COPY $SOURCE_PATH /workdir/src

ENTRYPOINT ["sh", "-c", "python -u ${RUN_SCRIPT}"]

So, what's happening here? First, we start from an already prepared base image with things like CUDA, TensorFlow, and other libraries. We usually start from this particular image, but some architectures may require another version of CUDA or a different Linux distribution - in that case you can start from scratch.

Next, this particular model needs some additional libraries. In this case it's PyTorch.

Finally, we copy the source code and set up some build-time arguments. As you may note, there are two arguments:

  • SOURCE_PATH - the source code path; usually we pass something like "nn/unet_v2/src" here
  • RUN_SCRIPT - which Python file to run

The last one requires some additional explanation. We support three modes for each architecture:

  • Train - to train a new model from scratch or with the help of transfer learning
  • Inference - to run inference with an existing model on images
  • Deploy - to serve the model as a deployed API

Basically, it's the same Docker image, but built with a different RUN_SCRIPT. For example, to build the "inference" version, we set RUN_SCRIPT to the inference.py file with the corresponding code.

Now, let's build a fresh Docker image with UNet.

We have a little helper script, build-docker-image.sh. It simply runs the docker build command with respect to the repo folder structure. For unet_v2, the invocation will look like the following:

./build-docker-image.sh unet_v2 inference

The first argument is the folder with the corresponding network; the second is the entrypoint. The script will build an image and tag it as ${NETWORK}-${ENTRYPOINT} - in this case, unet_v2-inference.
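
For reference, the helper is essentially a thin wrapper around docker build run from the repository root. The sketch below is approximate - check build-docker-image.sh itself for the exact flags:

# Roughly what ./build-docker-image.sh unet_v2 inference runs (approximate):
docker build \
    --build-arg SOURCE_PATH=nn/unet_v2/src \
    --build-arg RUN_SCRIPT=inference.py \
    -f nn/unet_v2/Dockerfile \
    -t unet_v2-inference .

The other variants are built the same way, e.g. ./build-docker-image.sh unet_v2 train produces an image tagged unet_v2-train.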

Connect the image to the Supervisely platform

Push the image to a Docker registry. You can use Docker Hub or your own private registry. Log in to it using the docker login command, then tag the image and push it, like this:

docker login ...
docker tag unet_v2-inference myname/unet_v2-inference
docker push myname/unet_v2-inference

Now open the Neural networks -> Architectures page and click the "Create" button. Enter a title and put the Docker image name into the "Inference docker image" field. Please do not forget to include the tag (for example, :latest).

The new architecture has been created. Next, let's add a new model. Go to Neural networks -> Import and choose your newly created architecture. You will also need to attach the weights as a .tar archive. Those files will be provided at runtime in a local folder.

In this case, we suggest deploying our UNet model from the Model Zoo and downloading its weights by clicking the "three dots" on the Models page and selecting the "Download" option.

One last thing: if you have pushed your images to a private Docker registry, do not forget to re-deploy your agent and provide the credentials under "Advanced settings".

That's it! Now, if you click the "Test" button, the agent on your machine will pull your image and execute RUN_SCRIPT.

Local development

Of course, it's impractical to build and push a new image every time you change a single line. In that case you don't even need the agent - you can run the Docker image locally!

We created another helper script, run-as-developer.sh. It builds and runs your image and also mounts the folder with the source code, so that every file change is immediately applied inside the container.

We also share some specific environment variables and files, like $DISPLAY, so you can even run your favorite IDE for debugging.

Also, the input images and model weights (see above) must be provided in /sly_task_data. For that purpose, we mount the nn/unet_v2/data folder when you run run-as-developer.sh. For an example input, you can look into the tasks folder of your agent directory: ~/.supervisely-agent/:token/tasks/:any-id.
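
Under the hood, this boils down to an nvidia-docker run with the source and data folders mounted. The following is a rough sketch, with all paths and flags illustrative rather than exact - see run-as-developer.sh for the real invocation:

# Illustrative only - run-as-developer.sh is the source of truth.
# Mount live sources and task data, pass X11 bits through, and open a shell:
nvidia-docker run --rm -it \
    -v "$(pwd)/nn/unet_v2/src:/workdir/src" \
    -v "$(pwd)/nn/unet_v2/data:/sly_task_data" \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --entrypoint bash \
    unet_v2-inference

From that shell you can run python -u inference.py (or any other RUN_SCRIPT) and re-run it after every code change without rebuilding the image.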