@jiwnchoi
Created February 23, 2023 05:55
Deploy FastAPI+PyTorch API with Docker
# docker-compose.yml
# Example of a multi-GPU deployment: one service per GPU, each pinned to a
# single device via device_ids and published on its own host port.
services:
  device0:
    image: image/name
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]
    ports:
      - "5000:3000"
  device1:
    image: image/name
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['1']
              capabilities: [gpu]
    ports:
      - "5001:3000"
  device2:
    image: image/name
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['2']
              capabilities: [gpu]
    ports:
      - "5002:3000"
  device3:
    image: image/name
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['3']
              capabilities: [gpu]
    ports:
      - "5003:3000"
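Each replica publishes the container's port 3000 on its own host port (5000 + device index), so a caller can spread requests across the GPUs by rotating through the replicas. A minimal client-side sketch — the `localhost` host and the `next_replica` helper are assumptions, not part of the gist:

```python
# Round-robin dispatch across the four GPU-pinned replicas.
# Ports mirror the compose file above; "localhost" is an assumption
# (replace with the Docker host's address in practice).
from itertools import cycle

HOST = "localhost"
PORTS = [5000 + i for i in range(4)]  # device0..device3

_replicas = cycle(f"http://{HOST}:{p}" for p in PORTS)

def next_replica() -> str:
    """Return the base URL of the next replica in round-robin order."""
    return next(_replicas)
```

Each call to `next_replica()` yields the next base URL (`http://localhost:5000`, `:5001`, …) and wraps back to the first replica after the fourth, so successive requests land on different GPUs.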
## First, clone your API repository into the current folder, then add this Dockerfile to it.
FROM pytorch/pytorch:latest
## Copy the current folder into the image
COPY ./ ./
## Install git-lfs to download Hugging Face models and datasets
RUN apt update -y && apt install git git-lfs -y
RUN git lfs install
## Download model
RUN git clone https://huggingface.co/{Model Name}
## Download Dataset
RUN git clone https://huggingface.co/datasets/{Dataset Name}
## Install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
## Exposing port
EXPOSE 3000
## Running command
# Example for FastAPI
CMD ["uvicorn", "app:app", "--port=3000", "--host=0.0.0.0"]
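To build the image and bring all four replicas up, the standard Docker CLI invocations apply. A small sketch that assembles those commands — the tag `image/name` is the placeholder from the compose file, the helper names are hypothetical, and actually running them requires Docker on `PATH`:

```python
# Assemble the build-and-deploy commands for the stack above.
# "image/name" is the placeholder tag from the compose file.
IMAGE = "image/name"

def build_cmd(tag: str) -> list[str]:
    """Command to build the API image from this Dockerfile."""
    return ["docker", "build", "-t", tag, "."]

def up_cmd() -> list[str]:
    """Command to start all four GPU-pinned replicas in the background."""
    return ["docker", "compose", "up", "-d"]

# To actually run them (requires Docker installed):
# import subprocess
# subprocess.run(build_cmd(IMAGE), check=True)
# subprocess.run(up_cmd(), check=True)
```

Because each compose service reserves exactly one GPU, the code inside every container sees its assigned device as the only GPU, so the application does not need per-replica device logic.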