1. Clone Transformers4Rec from GitHub:
git clone https://github.com/NVIDIA-Merlin/Transformers4Rec.git
2. Download the data files and folders from Google Drive (https://drive.google.com/drive/u/0/folders/1nTuG6UHWOEaZnBJj7YSIVvnphE1zGc1h), copy the directory into the Transformers4Rec directory, and mount it to the container.
3. Pull the Merlin Docker image and run it with the Transformers4Rec and data volumes mounted:
docker run -it --gpus device=3 --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p 8000:8000 -p 8001:8001 -p 8002:8002 -p 8889:8888 -v $PATH_TO_T4REC:/workspace/Transformers4Rec/ -v $PATH_TO_data:/workspace/data/ nvcr.io/nvstaging/merlin/merlin_ci_runner:latest
4. Pull the NVTabular inference PR and update it with the latest changes from main:
cd /nvtabular
git pull origin main
pip install -e .
5. Install Transformers4Rec:
cd /workspace/Transformers4Rec/
pip install -e .
pip install torchmetrics==0.3.2
6. Install JupyterLab, its dependencies, and screen:
pip install jupyterlab
pip install ipywidgets
apt-get update && apt-get install -y screen
7. Launch the Jupyter server inside screen and detach it:
screen jupyter-lab --allow-root --ip='0.0.0.0' --NotebookApp.token=''
Press Ctrl+A, then D, to detach from the screen session and return to the terminal.
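If you later need to get back to the Jupyter logs, screen can list and reattach detached sessions:

```shell
# List running screen sessions (each entry shows an ID like 1234.pts-0.hostname)
screen -ls

# Reattach to the detached session; pass the session ID if more than one exists,
# e.g. `screen -r 1234`
screen -r
```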
8. Browse to localhost:8889 on your local machine to open JupyterLab and test the notebooks. (The container's port 8888 is mapped to host port 8889 in the docker run command above.)
**Note:** For the demo notebook, once you have trained and saved the model using the NVTabular function `export_pytorch_ensemble`, you will need to start the Triton server in a terminal using the following command:
tritonserver --model-repository=<path to models> --model-control-mode=explicit
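Because the server runs with `--model-control-mode=explicit`, Triton does not load models automatically; they must be loaded through its model repository HTTP API. A minimal sketch against the ports published in the docker run command above; `t4r_pytorch` is a placeholder name, substitute the name of the ensemble you exported:

```shell
# Check that the Triton server is up and ready
curl -s localhost:8000/v2/health/ready

# List the models Triton found in the model repository
curl -s -X POST localhost:8000/v2/repository/index

# Explicitly load a model ("t4r_pytorch" is a placeholder -- use your model's name)
curl -s -X POST localhost:8000/v2/repository/models/t4r_pytorch/load
```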