You can download a Docker image to run your model on your own infrastructure. This option is only available on paid plans; take a look at our pricing plans for details. Note that you can only run one Docker image at a time, and that the image needs an internet connection to run.

How do I download the Docker image?
You can download the Docker image from the Docker integration tab. Once you have built a model, you will see several options for integrating it. Just click on Docker integration and you are ready to go!

Can I activate Docker integration offline?
Yes, we offer offline Docker integration. To set it up, contact us on our support chat or email support@nanonets.com with the subject "Offline docker integration".

How do I run Docker on my server?

First, make sure that Docker is installed on your server. Below is a link to the installation instructions (the same site covers other platforms as well):
https://docs.docker.com/install/linux/docker-ce/ubuntu/
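As a quick sanity check (a sketch, not part of the official setup), you can confirm that the docker CLI is installed and on your PATH before moving on:

```shell
# Sketch: check whether the docker CLI is available before proceeding.
if command -v docker >/dev/null 2>&1; then
    docker --version        # prints the installed Docker version
else
    echo "Docker is not installed; see the installation link above."
fi
```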

Next, if your server has an NVIDIA GPU, you can use GPU acceleration for your Docker container. For that, you will need to install a toolkit called nvidia-docker. Below is the link to install nvidia-docker on your server:
https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(Native-GPU-Support)
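Once nvidia-docker is installed, NVIDIA's documentation suggests verifying GPU access by running nvidia-smi inside a CUDA container. The sketch below only prints that command rather than running it (the nvidia/cuda:9.0-base image tag is an assumption; remove the leading echo to actually execute the check):

```shell
# Print the GPU sanity-check command from NVIDIA's nvidia-docker docs.
# The nvidia/cuda:9.0-base tag is an assumption; any CUDA base image works.
# Remove the leading 'echo' to actually execute the check.
echo docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
```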

The next step is to pull the Docker image from our registry to your server. First, run the following command to log in to the registry:

sudo docker login docker.nanonets.com --username {email} --password {api_key}

Replace {email} with the email address of your Nanonets account and {api_key} with your account's API key, which you can find at this link:
https://app.nanonets.com/#/keys

You should see Login Succeeded as the output of the command on your server.
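For illustration, here is a sketch of the login command with hypothetical placeholder values filled in (the email and key below are made up; substitute your own). It only prints the command; remove the leading echo to run it:

```shell
# Hypothetical credentials for illustration only; replace with your own.
EMAIL="jane@example.com"           # your Nanonets account email
API_KEY="your_api_key_here"        # from https://app.nanonets.com/#/keys
# Print the login command (remove the leading 'echo' to execute it):
echo sudo docker login docker.nanonets.com --username "$EMAIL" --password "$API_KEY"
```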

Now you can run the Docker container on your server with the following command:

sudo docker run --runtime=nvidia -d -p 80:8080 docker.nanonets.com/{model_id}:gpu

Replace {model_id} with the ID of the model you want to run with Docker.
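As a worked sketch (the model ID below is a placeholder), here is the same substitution for the run command, with the flags annotated: -d runs the container in the background, and -p 80:8080 maps port 80 on the host to port 8080 inside the container. It only prints the command; remove the leading echo to run it:

```shell
# Placeholder model ID for illustration; substitute your own model's ID.
MODEL_ID="your_model_id"
# -d detaches the container; -p 80:8080 maps host port 80 to container port 8080.
# Print the run command (remove the leading 'echo' to execute it):
echo sudo docker run --runtime=nvidia -d -p 80:8080 docker.nanonets.com/"$MODEL_ID":gpu
```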


Once the command executes, your Docker container should start serving the model on port 80 of your server.
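To confirm the container came up, here is a guarded sketch (it assumes the run command above was executed on this machine, and the root path in the curl check is an assumption about the model server):

```shell
# Sketch: confirm the container is running and the port is responding.
if command -v docker >/dev/null 2>&1; then
    docker ps || echo "Could not query Docker (you may need sudo)."
else
    echo "Docker is not installed on this machine."
fi
# Expect an HTTP status code from the server on port 80 (the / path is an assumption):
curl -s -o /dev/null -w "HTTP status: %{http_code}\n" http://localhost:80/ || echo "No response on port 80 yet."
```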
