C4W3_Colab_BERT_Loss_Model.ipynb
| { | |
| "nbformat": 4, | |
| "nbformat_minor": 0, | |
| "metadata": { | |
| "accelerator": "GPU", | |
| "colab": { | |
| "name": "C4W3_Colab_BERT_Loss_Model.ipynb", | |
| "provenance": [], | |
| "collapsed_sections": [], | |
| "toc_visible": true, | |
| "include_colab_link": true | |
| }, | |
| "kernelspec": { | |
| "name": "python3", | |
| "display_name": "Python 3" | |
| } | |
| }, | |
| "cells": [ | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "view-in-github", | |
| "colab_type": "text" | |
| }, | |
| "source": [ | |
| "<a href=\"https://colab.research.google.com/gist/usametov/745855b665186c2f6baab2201cf4b955/c4w3_colab_bert_loss_model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "7yuytuIllsv1" | |
| }, | |
| "source": [ | |
| "# Assignment 3, Part 1: BERT Loss Model \n", | |
| "\n", | |
| "Welcome to the part 1 of testing the models for this week's assignment. We will perform decoding using the BERT Loss model. In this notebook we'll use an input, mask (hide) random word(s) in it and see how well we get the \"Target\" answer(s). \n", | |
| "\n", | |
| "## IMPORTANT\n", | |
| "\n", | |
| "- As you cannot save the changes you make to this colab, you have to make a copy of this notebook in your own drive and run that. You can do so by going to `File -> Save a copy in Drive`. Close this colab and open the copy which you have made in your own drive.\n", | |
| "\n", | |
| "- Go to this [google drive folder](https://drive.google.com/drive/folders/1rOZsbEzcpMRVvgrRULRh1JPFpkIG_JOz?usp=sharing) named `NLP C4 W3 Data`. In the folder, next to its name use the drop down menu to select `\"Add shortcut to Drive\" -> \"My Drive\" and then press ADD SHORTCUT`. This should add a shortcut to the folder `NLP C4 W3 Data` within your own google drive. Please make sure this happens, as you'll be reading the data for this notebook from this folder.\n", | |
| "\n", | |
| "- Make sure your runtime is GPU (_not_ CPU or TPU). And if it is an option, make sure you are using _Python 3_. You can select these settings by going to `Runtime -> Change runtime type -> Select the above mentioned settings and then press SAVE`" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "Db6LQW5cMSgx" | |
| }, | |
| "source": [ | |
| "**Note: Restarting the runtime maybe required**.\n", | |
| "\n", | |
| "Colab will tell you if the restarting is necessary -- you can do this from the:\n", | |
| "\n", | |
| "Runtime > Restart Runtime\n", | |
| "\n", | |
| "option in the dropdown." | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "qibASJKuDxpm" | |
| }, | |
| "source": [ | |
| "## Outline\n", | |
| "\n", | |
| "- [Part 0: Downloading and loading dependencies](#0)\n", | |
| "- [Part 1: Mounting your drive for data accessibility](#1)\n", | |
| "- [Part 2: Getting things ready](#2)\n", | |
| "- [Part 3: Part 3: BERT Loss](#3)\n", | |
| " - [3.1 Decoding](#3.1)" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "ysxogfC1M158" | |
| }, | |
| "source": [ | |
| "<a name='0'></a>\n", | |
| "# Part 0: Downloading and loading dependencies\n", | |
| "\n", | |
| "Uncomment the code cell below and run it to download some dependencies that you will need. You need to download them once every time you open the colab. You can ignore the `kfac` error." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "1BNZzCg0xv3R" | |
| }, | |
| "source": [ | |
| "#!pip -q install trax" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "uDhi6qLQMHzs" | |
| }, | |
| "source": [ | |
| "import pickle\n", | |
| "import string\n", | |
| "import ast\n", | |
| "import numpy as np\n", | |
| "import trax \n", | |
| "from trax.supervised import decoding\n", | |
| "import textwrap \n", | |
| "# Will come handy later.\n", | |
| "wrapper = textwrap.TextWrapper(width=70)" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
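| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Optional: the small check below is not part of the original assignment; it is a minimal sketch that confirms the runtime exposes a GPU, as recommended above. It uses `jax`, which is pulled in by `trax`." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": {}, | |
| "source": [ | |
| "# Optional sanity check (sketch): list the devices JAX can see.\n", | |
| "# On a GPU runtime the output should include a GPU device.\n", | |
| "import jax\n", | |
| "print(jax.devices())" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |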
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "Cwr7LoXwQUW5" | |
| }, | |
| "source": [ | |
| "<a name='1'></a>\n", | |
| "# Part 1: Mounting your drive for data accessibility\n", | |
| "\n", | |
| "Run the code cell below and follow the instructions to mount your drive. The data is the same as used in the coursera version of the assignment." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "P7ZF7KiXzQEg" | |
| }, | |
| "source": [ | |
| "from google.colab import drive\n", | |
| "drive.mount('/content/drive/', force_remount=True)" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
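| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Optional: the cell below is a minimal sketch (not part of the original assignment) that verifies the `NLP C4 W3 Data` shortcut described earlier is visible after mounting. It uses the same path as the cells that follow." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": {}, | |
| "source": [ | |
| "# Optional sanity check (sketch): confirm the shared data folder is reachable.\n", | |
| "import os\n", | |
| "data_dir = '/content/drive/My Drive/NLP C4 W3 Data/'\n", | |
| "if os.path.isdir(data_dir):\n", | |
| "    print('Found data folder with files:', os.listdir(data_dir))\n", | |
| "else:\n", | |
| "    print('Data folder not found -- re-check the shortcut instructions above.')" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |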
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "otO5C5VwEJoU" | |
| }, | |
| "source": [ | |
| "<a name='2'></a>\n", | |
| "# Part 2: Getting things ready \n", | |
| "\n", | |
| "Run the code cell below to ready some functions which will later help us in decoding. The code and the functions are the same as the ones you previsouly ran on the coursera version of the assignment." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "dSmcZ_t8HBq6" | |
| }, | |
| "source": [ | |
| "example_jsons = list(map(ast.literal_eval, open(\"/content/drive/My Drive/NLP C4 W3 Data/data.txt\")))\n", | |
| "\n", | |
| "natural_language_texts = [example_json['text'] for example_json in example_jsons]\n", | |
| "\n", | |
| "PAD, EOS, UNK = 0, 1, 2\n", | |
| " \n", | |
| "def detokenize(np_array):\n", | |
| " return trax.data.detokenize(\n", | |
| " np_array,\n", | |
| " vocab_type='sentencepiece',\n", | |
| " vocab_file='sentencepiece.model',\n", | |
| " vocab_dir='/content/drive/My Drive/NLP C4 W3 Data/')\n", | |
| " \n", | |
| "def tokenize(s):\n", | |
| " # The trax.data.tokenize function operates on streams,\n", | |
| " # that's why we have to create 1-element stream with iter\n", | |
| " # and later retrieve the result with next.\n", | |
| " return next(trax.data.tokenize(\n", | |
| " iter([s]),\n", | |
| " vocab_type='sentencepiece',\n", | |
| " vocab_file='sentencepiece.model',\n", | |
| " vocab_dir='/content/drive/My Drive/NLP C4 W3 Data/'))\n", | |
| " \n", | |
| "vocab_size = trax.data.vocab_size(\n", | |
| " vocab_type='sentencepiece',\n", | |
| " vocab_file='sentencepiece.model',\n", | |
| " vocab_dir='/content/drive/My Drive/NLP C4 W3 Data/')\n", | |
| "\n", | |
| "def get_sentinels(vocab_size):\n", | |
| " sentinels = {}\n", | |
| "\n", | |
| " for i, char in enumerate(reversed(string.ascii_letters), 1):\n", | |
| "\n", | |
| " decoded_text = detokenize([vocab_size - i]) \n", | |
| " \n", | |
| " # Sentinels, ex: <Z> - <a>\n", | |
| " sentinels[decoded_text] = f'<{char}>'\n", | |
| " \n", | |
| " return sentinels\n", | |
| "\n", | |
| "sentinels = get_sentinels(vocab_size) \n", | |
| "\n", | |
| "\n", | |
| "def pretty_decode(encoded_str_list, sentinels=sentinels):\n", | |
| " # If already a string, just do the replacements.\n", | |
| " if isinstance(encoded_str_list, (str, bytes)):\n", | |
| " for token, char in sentinels.items():\n", | |
| " encoded_str_list = encoded_str_list.replace(token, char)\n", | |
| " return encoded_str_list\n", | |
| " \n", | |
| " # We need to decode and then prettyfy it.\n", | |
| " return pretty_decode(detokenize(encoded_str_list))\n", | |
| "\n", | |
| "\n", | |
| "inputs_targets_pairs = []\n", | |
| "\n", | |
| "# here you are reading already computed input/target pairs from a file\n", | |
| "with open ('/content/drive/My Drive/NLP C4 W3 Data/inputs_targets_pairs_file.txt', 'rb') as fp:\n", | |
| " inputs_targets_pairs = pickle.load(fp) \n", | |
| "\n", | |
| "\n", | |
| "def display_input_target_pairs(inputs_targets_pairs):\n", | |
| " for i, inp_tgt_pair in enumerate(inputs_targets_pairs, 1):\n", | |
| " inps, tgts = inp_tgt_pair\n", | |
| " inps, tgts = pretty_decode(inps), pretty_decode(tgts)\n", | |
| " print(f'[{i}]\\n'\n", | |
| " f'inputs:\\n{wrapper.fill(text=inps)}\\n\\n'\n", | |
| " f'targets:\\n{wrapper.fill(text=tgts)}\\n\\n\\n\\n') " | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
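| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Optional: the cell below is a small illustrative sketch (not part of the original assignment) that round-trips a made-up sentence through `tokenize` and `detokenize`, and prints a few of the sentinel mappings that `pretty_decode` relies on." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": {}, | |
| "source": [ | |
| "# Illustration only (sketch): round-trip an arbitrary sentence through the helpers above.\n", | |
| "sample_text = 'The quick brown fox jumps over the lazy dog.'\n", | |
| "sample_ids = tokenize(sample_text)\n", | |
| "print('token ids:', sample_ids)\n", | |
| "print('detokenized:', detokenize(sample_ids))\n", | |
| "# A few sentinel tokens and the short forms pretty_decode replaces them with.\n", | |
| "print('example sentinels:', list(sentinels.items())[:3])" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |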
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "bHLYA6N7Izzi" | |
| }, | |
| "source": [ | |
| "display_input_target_pairs(inputs_targets_pairs)" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "qkfUvyjtEu6J" | |
| }, | |
| "source": [ | |
| "<a name='3'></a>\n", | |
| "# Part 3: BERT Loss\n", | |
| "\n", | |
| "We will not train the encoder which you have built in the assignment (coursera version). Training it could easily cost you a few days depending on which GPUs/TPUs you are using. Very few people train the full transformer from scratch. Instead, what the majority of people do, they load in a pretrained model, and they fine tune it on a specific task. That is exactly what you are about to do. Let's start by initializing and then loading in the model. \n", | |
| "\n", | |
| "Initialize the model from the saved checkpoint." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "Hsqi-dzzxv4e" | |
| }, | |
| "source": [ | |
| "# Initializing the model\n", | |
| "model = trax.models.Transformer(\n", | |
| " d_ff = 4096,\n", | |
| " d_model = 1024,\n", | |
| " max_len = 2048,\n", | |
| " n_heads = 16,\n", | |
| " dropout = 0.1,\n", | |
| " input_vocab_size = 32000,\n", | |
| " n_encoder_layers = 24,\n", | |
| " n_decoder_layers = 24,\n", | |
| " mode='predict') # Change to 'eval' for slow decoding." | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "lOB1C131xv4i" | |
| }, | |
| "source": [ | |
| "# Now load in the model\n", | |
| "# this takes about 1 minute\n", | |
| "shape11 = trax.shapes.ShapeDtype((1, 1), dtype=np.int32) # Needed in predict mode.\n", | |
| "model.init_from_file('/content/drive/My Drive/NLP C4 W3 Data/models/model.pkl.gz',\n", | |
| " weights_only=True, input_signature=(shape11, shape11))" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "9Wy3pr4ZfzA_" | |
| }, | |
| "source": [ | |
| "# Uncomment to see the transformer's structure.\n", | |
| "# print(model)" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "HuTyft5EBQK6" | |
| }, | |
| "source": [ | |
| "<a name='3.1'></a>\n", | |
| "### 3.1 Decoding\n", | |
| "\n", | |
| "Now you will use one of the `inputs_targets_pairs` for input and as target. Next you will use the `pretty_decode` to output the input and target. The code to perform all of this has been provided below." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "LJ8s_xZ1QtkI" | |
| }, | |
| "source": [ | |
| "# # using the 3rd example\n", | |
| "# c4_input = inputs_targets_pairs[2][0]\n", | |
| "# c4_target = inputs_targets_pairs[2][1]\n", | |
| "\n", | |
| "# using the 1st example\n", | |
| "c4_input = inputs_targets_pairs[0][0]\n", | |
| "c4_target = inputs_targets_pairs[0][1]\n", | |
| "\n", | |
| "print('pretty_decoded input: \\n\\n', pretty_decode(c4_input))\n", | |
| "print('\\npretty_decoded target: \\n\\n', pretty_decode(c4_target))\n", | |
| "print('\\nc4_input:\\n\\n', c4_input)\n", | |
| "print('\\nc4_target:\\n\\n', c4_target)\n", | |
| "print(len(c4_target))\n", | |
| "print(len(pretty_decode(c4_target)))" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "OD9EchfPFlAf" | |
| }, | |
| "source": [ | |
| "Run the cell below to decode" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "6HwIdimiN0k2" | |
| }, | |
| "source": [ | |
| "# Faster decoding: (still - maybe lower max_length to 20 for speed)\n", | |
| "# Temperature is a parameter for sampling.\n", | |
| "# # * 0.0: same as argmax, always pick the most probable token\n", | |
| "# # * 1.0: sampling from the distribution (can sometimes say random things)\n", | |
| "# # * values inbetween can trade off diversity and quality, try it out!\n", | |
| "output = decoding.autoregressive_sample(model, inputs=np.array(c4_input)[None, :],\n", | |
| " temperature=0.0, max_length=50)\n", | |
| "print(wrapper.fill(pretty_decode(output[0])))" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "id": "0n1MG7zNJZdh" | |
| }, | |
| "source": [ | |
| "### Note: As you can see the RAM is almost full, it is because the model and the decoding is memory heavy. Running it the second time might give you an answer that makes no sense, or repetitive words. If that happens restart the runtime (see how to at the start of the notebook) and run all the cells again." | |
| ] | |
| }, | |
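| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Optional: to see the effect of the `temperature` parameter described above, you could re-run the decoding with `temperature=1.0` instead of `0.0`. This is only a sketch; given the memory note above, it is best done on a fresh runtime after re-running all the cells." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "metadata": {}, | |
| "source": [ | |
| "# Sketch: sample from the full distribution instead of taking the argmax.\n", | |
| "# Uncomment and run on a fresh runtime; expect more varied (and noisier) output.\n", | |
| "# sampled = decoding.autoregressive_sample(model, inputs=np.array(c4_input)[None, :],\n", | |
| "#                                          temperature=1.0, max_length=50)\n", | |
| "# print(wrapper.fill(pretty_decode(sampled[0])))" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| }, | |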
| { | |
| "cell_type": "code", | |
| "metadata": { | |
| "id": "7PRbChEc8dqe" | |
| }, | |
| "source": [ | |
| "" | |
| ], | |
| "execution_count": null, | |
| "outputs": [] | |
| } | |
| ] | |
| } |