Last active
December 22, 2021 00:46
-
-
Save ELC/756040fe84a8bb3d14c59b0e997c84e9 to your computer and use it in GitHub Desktop.
Deep Learning with Free GPU (FastAI + Google Colab).ipynb
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
{ | |
"nbformat": 4, | |
"nbformat_minor": 0, | |
"metadata": { | |
"colab": { | |
"name": "Google_Colab_for_Fastai_General_Template4.ipynb", | |
"version": "0.3.2", | |
"provenance": [], | |
"collapsed_sections": [], | |
"include_colab_link": true | |
}, | |
"kernelspec": { | |
"name": "python3", | |
"display_name": "Python 3" | |
}, | |
"accelerator": "GPU" | |
}, | |
"cells": [ | |
{ | |
"cell_type": "markdown", | |
"metadata": { | |
"id": "view-in-github", | |
"colab_type": "text" | |
}, | |
"source": [ | |
"[View in Colaboratory](https://colab.research.google.com/gist/ELC/35db433bec8401e886e227d50aa448e3/google_colab_for_fastai_general_template4.ipynb)" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "PjOMeCoHHlzQ", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"# Google Colab for Fast.ai Course Template\n", | |
"\n", | |
"Remember to enable the GPU! ***Edit > Notebook settings > set \"Hardware Accelerator\" to GPU.***\n", | |
"\n", | |
"Check [the source]() of this template for updates\n" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "ArPdbxB-vl9Y", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"## Installing dependencies ##\n", | |
"We need to manually install fastai and pytorch. And maybe other things that fastai depends on (see [here](https://github.com/fastai/fastai/blob/master/requirements.txt)).\n", | |
"\n", | |
"I will be referring to [this fastai forum thread](http://forums.fast.ai/t/colaboratory-and-fastai/10122/6) and [this blogpost](https://towardsdatascience.com/fast-ai-lesson-1-on-google-colab-free-gpu-d2af89f53604) if I get stuck. This is also a handy resource for using pytorch in colab: https://jovianlin.io/pytorch-with-gpu-in-google-colab/ (and his [example notebook](https://colab.research.google.com/drive/1jxUPzMsAkBboHMQtGyfv5M5c7hU8Ss2c#scrollTo=ed-8FUn2GqQ4)!). And this [post](https://medium.com/@chsafouane/getting-started-with-pytorch-on-google-colab-811c59a656b6). **Be careful with python and python3 being the same in this notebook, also there is no difference between pip and pip3**" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "SY72s-PAwUio", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 84 | |
}, | |
"outputId": "ae0b6d52-8aa0-4a9d-baf5-78614a7aeb1f" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"!python3 -V\n", | |
"!python -V\n", | |
"!pip -V\n", | |
"!pip3 -V" | |
], | |
"execution_count": 1, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"Python 3.6.6\n", | |
"Python 3.6.6\n", | |
"pip 18.0 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)\n", | |
"pip 18.0 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)\n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "HJoT6vSgGdAe", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"**Installing fastai (1.x) from PyPI and installing PyTorch 1.x with CUDA 9.2** \n" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "av1b-3YWBbT2", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"!pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html\n", | |
"!pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ torchvision==0.2.1.post1\n", | |
"!pip install fastai" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "TBT_tbpj-7hZ", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"**Installing LEGACY fastai (0.7) from source and installing PyTorch 0.3.1 with CUDA 9.1** \n", | |
"\n", | |
"Installing from pypi is not recommended as mentioned in [fastai-github-readme](https://github.com/fastai/fastai) (due to it's rapid changes and lack of tests) and you don't want to use conda on Google Colab. So here are few steps to install the library from source." | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "qECKi529HtXm", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 84 | |
}, | |
"outputId": "7a406fa5-05ba-45b9-cba3-13ccdc9bf203" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"%%bash\n", | |
"\n", | |
"if ! [ -d fastai ]\n", | |
"then\n", | |
" git clone https://github.com/fastai/fastai.git\n", | |
"fi\n", | |
"\n", | |
"cd fastai\n", | |
"\n", | |
"git pull\n", | |
"\n", | |
"cd old\n", | |
"\n", | |
"pip -q install . && echo Successfully Installed Fastai 0.7\n", | |
"\n", | |
"pip -q install http://download.pytorch.org/whl/cu91/torch-0.3.1-cp36-cp36m-linux_x86_64.whl && echo Successfully Installed PyTorch\n", | |
"\n", | |
"pip -q install torchvision && echo Successfully Installed TorchVision" | |
], | |
"execution_count": 9, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"Already up to date.\n", | |
"Successfully Installed Fastai 0.7\n", | |
"Successfully Installed PyTorch\n", | |
"Successfully Installed TorchVision\n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "sIIDTp5G1Hs2", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"**Import all the libraries**" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "XB3543WIHN0h", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"Imports for FastAI 1.x" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "x2kfLCuPHM4b", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"from fastai.imports import *" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "ja8LBm3DZ6vZ", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"Imports for FastAI Legacy" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "akD5dZfY1Fx8", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"# This file contains all the main external libs we'll use\n", | |
"from fastai.imports import *\n", | |
"from fastai.transforms import *\n", | |
"from fastai.conv_learner import *\n", | |
"from fastai.model import *\n", | |
"from fastai.dataset import *\n", | |
"from fastai.sgdr import *\n", | |
"from fastai.plots import *" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "MgvJGuuJs_tL", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"## GPU Check ##\n", | |
"\n", | |
"Check whether the GPU is enabled" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "zt_ux_PqxL2N", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 34 | |
}, | |
"outputId": "f207fa8c-4fa9-4f99-de97-af00e6a02a6e" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"f'Is CUDA and CUDNN enabled: {torch.cuda.is_available()} and {torch.backends.cudnn.enabled}'" | |
], | |
"execution_count": 5, | |
"outputs": [ | |
{ | |
"output_type": "execute_result", | |
"data": { | |
"text/plain": [ | |
"'Is CUDA and CUDNN enabled: True and True'" | |
] | |
}, | |
"metadata": { | |
"tags": [] | |
}, | |
"execution_count": 5 | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "NrbLtmTPHyl0", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"**Check how much of the GPU is available**\n", | |
"\n", | |
"I'm using the following code from [a stackoverflow thread](https://stackoverflow.com/questions/48750199/google-colaboratory-misleading-information-about-its-gpu-only-5-ram-available\n", | |
") to check what % of the GPU is being utilized right now. 100% is bad; 0% is good (all free for me to use!)." | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "tCHMN-qZs5NJ", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 67 | |
}, | |
"outputId": "e8ac7284-4039-43b2-fd59-0d43ee129998" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"# memory footprint support libraries/code\n", | |
"\n", | |
"!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi\n", | |
"!pip -q install gputil\n", | |
"!pip -q install psutil\n", | |
"!pip -q install humanize\n", | |
"\n", | |
"import psutil\n", | |
"import humanize\n", | |
"import os\n", | |
"import GPUtil as GPU\n", | |
"\n", | |
"GPUs = GPU.getGPUs()\n", | |
"gpu = GPUs[0]\n", | |
"process = psutil.Process(os.getpid())\n", | |
"\n", | |
"print(f\"Number of GPUs: {len(GPUs)}\")\n", | |
"print(f\"Gen RAM Free: {humanize.naturalsize( psutil.virtual_memory().available )} | Proc size: {humanize.naturalsize( process.memory_info().rss)}\")\n", | |
"print(\"GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB\".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))" | |
], | |
"execution_count": 18, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"Number of GPUs: 1\n", | |
"Gen RAM Free: 12.8 GB | Proc size: 260.7 MB\n", | |
"GPU RAM Free: 11430MB | Used: 11MB | Util 0% | Total 11441MB\n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "q0WZ3Smd3P6w", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"# Ready to Go!" | |
] | |
} | |
] | |
} |
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Hello, are you aware this page is broken?