{ | |
"nbformat": 4, | |
"nbformat_minor": 0, | |
"metadata": { | |
"colab": { | |
"name": "Word Embeddings.ipynb", | |
"provenance": [], | |
"include_colab_link": true | |
}, | |
"kernelspec": { | |
"display_name": "Python 3", | |
"language": "python", | |
"name": "python3" | |
} | |
}, | |
"cells": [ | |
{ | |
"cell_type": "markdown", | |
"metadata": { | |
"id": "view-in-github", | |
"colab_type": "text" | |
}, | |
"source": [ | |
"<a href=\"https://colab.research.google.com/gist/firmai/2cb32fd9a9adc06ab9f56c897ef90edf/word-embeddings.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "If6j5zRpoWts" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"# Traditional Embeddings\n", | |
"\n", | |
"Word games:\n", | |
"\n", | |
"+ [Talk to books](https://books.google.com/talktobooks/)\n", | |
"\n", | |
"+ [Semantris](https://research.google.com/semantris)\n", | |
"\n", | |
"Deep Learning algorithms require the input to be represented as (sequences of) fixed-length feature vectors. \n", | |
"\n", | |
"+ Words in documents and other categorical features such as user/product ids in recommeders, names of places, visited URLs, etc. are usually represented by using a one-of-K scheme (**one-hot encoding**). \n", | |
"\n", | |
"+ Phrases are represented by bag-of-words or bag-of-ngrams features, loosing the ordering of words and ignoring semantics. \n", | |
"\n", | |
"Are these good representations for deep learning?\n", | |
"\n", | |
"Let's see how to represent **words**, but this line of reasoning can be extended to other items.\n", | |
"\n", | |
"+ There are an estimated 13 million tokens for the English language. \n", | |
"\n", | |
"+ One possible strategy is to encode word tokens each into some vector that represents a point in some sort of *word space* that *represents* language semantics. \n", | |
"\n", | |
"+ The most intuitive reason is that perhaps there actually exists some $N$-dimensional space (such that $N << 13$ million) that is sufficient to encode all semantics of our language. \n", | |
"\n", | |
"+ Each dimension would encode some meaning that we transfer using speech." | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "xmiMc411oWtv" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"### One-hot encoding\n", | |
"\n", | |
"If we represent every word as an $\\mathbb{R}^{|V|\\times 1}$ vector with all $0$s and one $1$ at the index of that word in the sorted english language, word vectors in this type of encoding would appear as the following:\n", | |
"\n", | |
"<centering>\n", | |
"$$w^{aardvark} = \\left[ \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\\\ \\vdots \\\\ 0 \\end{array} \\right], w^{a} = \\left[ \\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\\\ \\vdots \\\\ 0 \\end{array} \\right] , w^{at} = \\left[ \\begin{array}{c} 0 \\\\ 0 \\\\ 1 \\\\ \\vdots \\\\ 0 \\end{array} \\right] , \\cdots, w^{zebra} = \\left[ \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\\\ \\vdots \\\\ 1 \\end{array} \\right] $$\n", | |
"</centering>\n", | |
"\n", | |
"\n", | |
"We represent each word as a completely independent entity:\n", | |
"\n", | |
"$$(w^{hotel})^Tw^{motel} = (w^{hotel})^Tw^{cat} = 0$$\n", | |
"\n", | |
"What other alternatives are there?" | |
] | |
}, | |
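{ | |
"metadata": {}, | |
"cell_type": "markdown", | |
"source": [ | |
"*A minimal sketch (added for illustration, not part of the original notes):* one-hot vectors for a tiny toy vocabulary, checking that the dot product between any two distinct words is zero, i.e. one-hot encoding carries no similarity information." | |
] | |
}, | |
{ | |
"metadata": {}, | |
"cell_type": "code", | |
"source": [ | |
"import numpy as np\n", | |
"\n", | |
"# Toy vocabulary (hypothetical); in practice |V| can be in the millions.\n", | |
"vocab = [\"aardvark\", \"a\", \"at\", \"hotel\", \"motel\", \"zebra\"]\n", | |
"V = len(vocab)\n", | |
"\n", | |
"def one_hot(word):\n", | |
"    # |V|-dimensional vector with a single 1 at the word's index\n", | |
"    vec = np.zeros(V)\n", | |
"    vec[vocab.index(word)] = 1.0\n", | |
"    return vec\n", | |
"\n", | |
"w_hotel, w_motel, w_zebra = one_hot(\"hotel\"), one_hot(\"motel\"), one_hot(\"zebra\")\n", | |
"# Every pair of distinct words is orthogonal: no similarity is encoded.\n", | |
"print(np.dot(w_hotel, w_motel), np.dot(w_hotel, w_zebra))  # 0.0 0.0" | |
], | |
"execution_count": null, | |
"outputs": [] | |
}, | |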
{ | |
"metadata": { | |
"id": "7HyeTq90oWtw" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"### Semantics from word-document matrix\n", | |
"\n", | |
"As our first attempt, we make the bold conjecture that words that are related will often appear in the same documents (or phrases, paragraphs, etc.). \n", | |
"\n", | |
"For instance, \"banks\", \"bonds\", \"stocks\", \"money\", etc. are probably likely to appear together. But \"banks\", \"octopus\", \"banana\", and \"hockey\" would probably not consistently appear together. \n", | |
"\n", | |
"We use this fact to build a word-document matrix, $X$ in the following manner: \n", | |
"\n", | |
"+ Loop over billions of documents and for each time word $i$ appears in document $j$, we add one to entry $X_{ij}$. \n", | |
"\n", | |
"This is obviously a very large matrix ($\\mathbb{R}^{|V|\\times M}$) and it scales with the number of documents ($M$). \n", | |
"\n", | |
"So perhaps we can try something better, such as building a window based co-occurrence matrix.\n", | |
"\n", | |
"In this method we count the number of times each word appears inside a window of a particular size around the word of interest (this is a sparse ($\\mathbb{R}^{|V|\\times |V|}$) matrix). We calculate this count for all the words in corpus. \n", | |
"\n", | |
"Let our corpus contain just three sentences and the window size be 1:\n", | |
"\n", | |
"+ ``I enjoy flying``.\n", | |
"+ ``I like NLP``.\n", | |
"+ ``I like deep learning``.\n", | |
"\n", | |
"The resulting counts matrix will then be:\n", | |
"\n", | |
"$$X=\\left[ \\begin{array}{cccccccc}\n", | |
" & I & like & enjoy & deep & learning & NLP & flying & . \\\\\n", | |
" I & 0 & 2 & 1 & 0 & 0 & 0 & 0 & 0\\\\\n", | |
" like & 2 & 0 & 0 & 1 & 0 & 1 & 0 & 0\\\\\n", | |
" enjoy & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n", | |
" deep & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n", | |
" learning & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\\\\n", | |
" NLP & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\\\\n", | |
" flying & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1\\\\\n", | |
" . & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \\\\\n", | |
" \\end{array} \\right]$$\n", | |
"\n", | |
"Once the $|V|\\times|V|$ co-occurrence matrix $X$ has been generated, we can apply SVD on $X$ to get $X = USV^T$ and select the first $k$ columns of $U$ to get a $k$-dimensional word vectors. \n", | |
"\n", | |
"$\\frac{\\sum_{i = 1}^{k}\\sigma_i}{\\sum_{i = 1}^{|V|}\\sigma_i}$ indicates the amount of variance captured by the first $k$ dimensions.\n", | |
"\n", | |
"These vectors encode some kind of semantics but they have some problems:\n", | |
"\n", | |
"+ The dimensions of the matrix can change very often (new words are added very frequently and corpus changes in size).\n", | |
"+ SVD based methods do not scale well for big matrices and it is hard to incorporate new words or documents. \n", | |
"+ The matrix is extremely sparse since most words do not co-occur.\n", | |
"+ The matrix is very high dimensional in general ($\\approx 10^6 \\times 10^6$)\n", | |
"+ Quadratic cost to train (i.e. to perform SVD)\n", | |
"+ Requires the incorporation of some hacks on $X$ to account for the drastic imbalance in word frequency\n", | |
"\n", | |
"Some solutions to exist to resolve some of the issues discussed above:\n", | |
"+ Ignore function words such as \"the\", \"he\", \"has\", etc.\n", | |
"+ Apply a ramp window -- i.e. weight the co-occurrence count based on distance between the words in the document. \n", | |
"+ Use Pearson correlation and set negative counts to 0 instead of using just raw count.\n", | |
"\n", | |
"But a NN method can solve many of these issues in a far more elegant manner....\n", | |
"\n" | |
] | |
}, | |
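{ | |
"metadata": {}, | |
"cell_type": "markdown", | |
"source": [ | |
"*A minimal sketch (added for illustration; assumes only NumPy):* building the window-1 co-occurrence matrix for the three toy sentences above and applying a truncated SVD to obtain $k$-dimensional word vectors." | |
] | |
}, | |
{ | |
"metadata": {}, | |
"cell_type": "code", | |
"source": [ | |
"import numpy as np\n", | |
"\n", | |
"# Toy corpus from the text; the final \".\" is treated as a token.\n", | |
"corpus = [[\"I\", \"enjoy\", \"flying\", \".\"],\n", | |
"          [\"I\", \"like\", \"NLP\", \".\"],\n", | |
"          [\"I\", \"like\", \"deep\", \"learning\", \".\"]]\n", | |
"\n", | |
"words = [\"I\", \"like\", \"enjoy\", \"deep\", \"learning\", \"NLP\", \"flying\", \".\"]\n", | |
"idx = {w: i for i, w in enumerate(words)}\n", | |
"\n", | |
"# Window-based co-occurrence counts with window size 1.\n", | |
"window = 1\n", | |
"X = np.zeros((len(words), len(words)))\n", | |
"for sent in corpus:\n", | |
"    for i, w in enumerate(sent):\n", | |
"        for j in range(max(0, i - window), min(len(sent), i + window + 1)):\n", | |
"            if j != i:\n", | |
"                X[idx[w], idx[sent[j]]] += 1\n", | |
"\n", | |
"# Truncated SVD: keep the first k columns of U as k-dimensional word vectors.\n", | |
"U, s, Vt = np.linalg.svd(X)\n", | |
"k = 2\n", | |
"word_vectors = U[:, :k]\n", | |
"print(\"variance captured by the first k singular values:\", s[:k].sum() / s.sum())\n", | |
"for w in words:\n", | |
"    print(w, word_vectors[idx[w]].round(2))" | |
], | |
"execution_count": null, | |
"outputs": [] | |
}, | |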
{ | |
"metadata": { | |
"id": "Mco2vL8LoWtw" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"## ``word2vec``\n", | |
"\n", | |
"Instead of computing and storing global information about some huge dataset (which might be billions of sentences), we can try to create a model that will be able to learn one iteration at a time and eventually be able **to encode the probability of a word given its context** (or, alternatively, the probability of the context given a word). \n", | |
"\n", | |
"> The **context of a word** is the set of $m$ surrounding words. For instance, the $m = 2$ context of the word ``fox`` in the sentence ``The quick brown fox jumped over the lazy dog`` is \\{``quick``, ``brown``, ``jumped``, ``over``\\}.\n", | |
"\n", | |
"The idea is to design a model whose parameters are the word vectors. Then, train the model on a certain objective related to representing the probability model. \n", | |
"\n", | |
"At every iteration we run our model, evaluate the errors, and follow an update rule that has some notion of penalizing the model parameters that caused the error. Thus, we learn our word vectors. \n", | |
"\n", | |
"Mikolov presented a simple, probabilistic model in 2013 that is known as ``word2vec``. In fact, ``word2vec`` includes 2 algorithms (**CBOW** and **skip-gram**) and 2 training methods (negative sampling and hierarchical softmax).\n", | |
"\n", | |
"> This model relies on a very important hypothesis in linguistics, *distributional semantics*. The basic idea of distributional semantics can be summed up in the so-called distributional hypothesis: linguistic items with similar distributions have similar meanings.\n", | |
"\n", | |
"> The distributional hypothesis can be applied to other data than words: items in shopping baskets, neural activations in wet neural networks, etc." | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "Ukv5D1sJoWtx" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"First, we need to create such a model that will assign a probability to a sequence of tokens. Let us start with an example: ``The cat jumped over the puddle``. \n", | |
"\n", | |
"A **good language model will give this sentence a high probability because this is a completely valid sentence**, syntactically and semantically. Similarly, the sentence ``stock boil fish is toy`` should have a very low probability because it makes no sense. \n", | |
"\n", | |
"Mathematically, we can call this probability on any given sequence of $n$ words:\n", | |
"\n", | |
"$$P(w_{1}, w_{2}, \\cdots, w_{n})$$\n", | |
"\n", | |
"### Language Models\n", | |
"\n", | |
"We know that \n", | |
"\n", | |
"$$P(w_{1}, w_{2}, \\cdots, w_{n}) = P(w_{1}) P(w_{2} | w_{1}) \\dots P(w_{n} | w_{1}, w_{2}, \\cdots, w_{n-1})$$\n", | |
"\n", | |
"but we alse know that we cannot compute this terms from a corpus by **counting**. All we can do if to approximate it.\n", | |
"\n", | |
"We can take the **unary language model** approach and break apart this probability by assuming the word occurrences are completely independent:\n", | |
"\n", | |
"$$P(w_{1}, w_{2}, \\cdots, w_{n}) \\approx \\prod_{i=1}^n P(w_{i})$$\n" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "PLsBYvWhIt67" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"However, we know the next word is highly contingent upon the previous sequence of words. So perhaps we let the probability of the sequence depend on the pairwise probability of a word in the sequence and the word next to it. We call this the **bigram model** and represent it as:\n", | |
"\n", | |
"$$P(w_{1}, w_{2}, \\cdots, w_{n}) \\approx \\prod_{i=2}^n P(w_{i} | w_{i-1})$$\n", | |
"\n", | |
"Again this is certainly a bit naive since we are only concerning ourselves with pairs of neighboring words rather than evaluating a whole sentence, but as we will see, this representation gets us pretty far along. Note in the Word-Word Matrix with a context of size 1, we basically can learn these pairwise probabilities. But again, this would require computing and storing global information about a massive dataset." | |
] | |
}, | |
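{ | |
"metadata": {}, | |
"cell_type": "markdown", | |
"source": [ | |
"*A minimal sketch (added for illustration; the tiny corpus is made up):* estimating bigram probabilities by counting, and scoring a sentence under the bigram approximation above." | |
] | |
}, | |
{ | |
"metadata": {}, | |
"cell_type": "code", | |
"source": [ | |
"from collections import Counter\n", | |
"\n", | |
"# Tiny toy corpus (hypothetical), just to illustrate counting-based bigram estimates.\n", | |
"corpus = \"the cat jumped over the puddle . the cat sat on the mat .\".split()\n", | |
"\n", | |
"unigram_counts = Counter(corpus)\n", | |
"bigram_counts = Counter(zip(corpus, corpus[1:]))\n", | |
"\n", | |
"def p_bigram(w, prev):\n", | |
"    # Maximum-likelihood estimate of P(w | prev) = count(prev, w) / count(prev)\n", | |
"    return bigram_counts[(prev, w)] / unigram_counts[prev]\n", | |
"\n", | |
"def sentence_prob(sentence):\n", | |
"    tokens = sentence.split()\n", | |
"    p = 1.0\n", | |
"    for prev, w in zip(tokens, tokens[1:]):\n", | |
"        p *= p_bigram(w, prev)\n", | |
"    return p\n", | |
"\n", | |
"print(sentence_prob(\"the cat jumped over the puddle\"))\n", | |
"print(sentence_prob(\"the cat sat on the mat\"))" | |
], | |
"execution_count": null, | |
"outputs": [] | |
}, | |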
{ | |
"metadata": { | |
"id": "TzRNwdZ0oWtx" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"### Skip-gram model\n", | |
"\n", | |
"The skip-gram approach is to create a model such that given the center word ``jumped``, the model will be able to predict or generate the surrounding words ``The``, ``cat``, ``over``, ``the``, ``puddle``. Here we call the word ``jumped`` the context. \n", | |
"\n", | |
"How can we learn this model? Well, we need to create an **objective function**. \n", | |
"\n", | |
"Let's suppose we have a text composed of $T$ words. In this case, for each position $t = 1, … , T$, our task is to predict context words within a window of fixed size $m$, given\tcenter word $w_t$:\n", | |
"\n", | |
"$$\n", | |
"L = \\prod^T_{t=1} \\prod_{-m \\leq j \\leq m ; j \\neq 0} P(w_{t+j} | w_t)\n", | |
"$$\n", | |
"\n", | |
"The objective function is the average negative log likelihood:\n", | |
"\n", | |
"$$\n", | |
"J = - \\frac{1}{T} \\sum^T_{t=1} \\sum_{-m \\leq j \\leq m ; j \\neq 0} \\log P(w_{t+j} | w_t)\n", | |
"$$\n", | |
"\n", | |
"How to calculate $P(w_{t+j} | w_t)$?\n", | |
"\n", | |
"**We will use a model that uses two vectors per word**. \n", | |
"\n", | |
"We create two matrices, $\\mathcal{V} \\in \\mathbb{R}^{n\\times|V|}$ and $\\mathcal{U} \\in \\mathbb{R}^{|V|\\times n}$, where $n$ is an arbitrary size which defines the size of our embedding space. \n", | |
"\n", | |
"$\\mathcal{V}$ is the input word matrix such that the $i$-th column of $\\mathcal{V}$ is the $n$-dimensional **embedded vector** for word $w_{i}$ when it is an input to this model. We denote this $n\\times1$ vector as $v_{i}$. \n", | |
"\n", | |
"Similarly, $\\mathcal{U}$ is the output word matrix. The $j$-th row of $\\mathcal{U}$ is an $n$-dimensional embedded vector for word $w_{j}$ when it is an output of the model. We denote this row of $\\mathcal{U}$ as $u_{j}$. \n", | |
"\n", | |
"Then, for a center word $c$ and a context word $o$, we assume the following model:\n", | |
"\n", | |
"$$\n", | |
"P(o | c) = \\frac{\\exp(u_o^T v_c)}{\\sum_{w \\in V} \\exp(u_w^T v_c)}\n", | |
"$$\n", | |
"\n", | |
"where $u_i = \\mathcal{U} w_i$, $v_i = \\mathcal{V} w_i$, and $w_i$ is the one hot encoding of a word $i$. Thus, our objective function is:\n", | |
"\n", | |
"$$\n", | |
"J = - \\frac{1}{T} \\sum^T_{c=1} \\sum_{-m \\leq j \\leq m ; j \\neq 0} \\log \\frac{\\exp(u_{c+j}^T v_c)}{\\sum_{w \\in V} \\exp(u_w^T v_c)} = \\frac{1}{T} \\sum^T_{c=1} \\left(\\sum_{-m \\leq j \\leq m ; j \\neq 0} - (u_{c+j}^T v_c) + 2m \\log \\sum_{w \\in V} \\exp(u_w^T v_c) \\right)\n", | |
"$$\n", | |
"\n", | |
"It is important to note that the second term involves a large number of vector products!\n", | |
"\n", | |
"The model works in 6 steps:\n", | |
"\n", | |
"+ We generate our one hot vector for the input context, $w_c$.\n", | |
"+ We get our embedded word vector from the context $v_c = \\mathcal{V} w_c$.\n", | |
"+ We generate $2m$ score vectors $u_o^T v_c $, where $ u_o = \\mathcal{U} w_o$.\n", | |
"+ Turn each of these scores into probabilities (which involves a large number of vector products). \n", | |
"+ Our objective is to match these $2m$ probability vectors to the one hot vectors of the actual input.\n", | |
"\n", | |
"This is the graphical representation of our model ($ W $ encodes $ \\mathcal{V}$ and $ W'$ encodes $ \\mathcal{U}$). The left part involves only one vector multiplications, but the right one is much heavier!\n", | |
"\n", | |
"<center>\n", | |
"<img src=\"https://github.com/DataScienceUB/DeepLearningMaster2019/blob/master/images/fword2vec-sg.png?raw=1\" alt=\"\" style=\"width: 400px;\"/> \n", | |
"</center>\n", | |
"\n", | |
"The computational complexity of this algorithm computed in a straightforward fashion is the size of our vocabulary, $O(V)$. This is because of the term $\\sum_{w \\in V} \\exp(u_w^T v_c)$. This denominator computes the similarity of all possible contexts $u_w$ and the target word $v_c$. \n", | |
"\n", | |
"### Negative sampling\n", | |
"\n", | |
"Loss functions $ J $ is expensive to compute because of the softmax normalization, where we sum over all $ |V| $ scores! A simple idea is we could instead just approximate it.\n", | |
"\n", | |
"While negative sampling is based on the Skip-Gram model, it is in fact optimizing a different objective. \n", | |
"\n", | |
"Consider a pair $(w, c)$ of word and context. Did this pair come from the training data? Let's denote by $P(D = 1|w, c)$ the probability that (w, c) came from the corpus data. Correspondingly, $P(D = 0|w, c)$ will be the probability that $(w, c)$ did not come from the corpus data. \n", | |
"\n", | |
"First, let's model $P(D = 1|w, c)$ with the sigmoid function:\n", | |
"\n", | |
"$$ P(D = 1|w, c, \\theta) = \\sigma (v_c^T v_w) = \\frac{1}{1+ e^{(-v_c^Tv_w)}}$$\n", | |
"Now, we build a new objective function that tries to maximize the probability of a word and context being in the corpus data if it indeed is, and maximize the probability of a word and context not being in the corpus data if it indeed is not. We take a simple maximum likelihood approach of these two probabilities. (Here we take $\\theta$ to be the parameters of the model, and in our case it is $\\mathcal{V}$ and $\\mathcal{U}$.)\n", | |
"\\begin{align*}\n", | |
"\\theta &= \\mbox{argmax}_{\\theta} \\prod_{(w,c) \\in D} P(D = 1|w, c, \\theta) \\prod_{(w,c) \\in \\tilde{D}} P(D = 0|w, c, \\theta) \\\\\n", | |
"&= \\mbox{argmax}_{\\theta} \\prod_{(w,c) \\in D} P(D = 1|w, c, \\theta) \\prod_{(w,c) \\in \\tilde{D}} (1 - P(D = 1|w, c, \\theta))\\\\\n", | |
"&= \\mbox{argmax}_{\\theta} \\sum_{(w,c) \\in D} \\log P(D = 1|w, c, \\theta) + \\sum_{(w,c) \\in \\tilde{D}} \\log(1 - P(D = 1|w, c, \\theta))\\\\\n", | |
"&= \\mbox{argmax}_{\\theta} \\sum_{(w,c) \\in D} \\log \\frac{1}{1 + \\exp(-u_w^Tv_c)} + \\sum_{(w,c) \\in \\tilde{D}} \\log(1 - \\frac{1}{1 + \\exp(-u_w^Tv_c)} )\\\\\n", | |
"&= \\mbox{argmax}_{\\theta} \\sum_{(w,c) \\in D} \\log \\frac{1}{1 + \\exp(-u_w^Tv_c)} + \\sum_{(w,c) \\in \\tilde{D}} \\log(\\frac{1}{1 + \\exp(u_w^Tv_c)} )\\\\\n", | |
"\\end{align*}\n", | |
"Note that maximizing the likelihood is the same as minimizing the negative log likelihood\n", | |
"\n", | |
"$$\n", | |
"J = - \\sum_{(w,c) \\in D} \\log \\frac{1}{1 + \\exp(-u_w^Tv_c)} - \\sum_{(w,c) \\in \\tilde{D}} \\log(\\frac{1}{1 + \\exp(u_w^Tv_c)} )\n", | |
"$$\n", | |
"\n", | |
"Note that $\\tilde{D}$ is a \"false\" or \"negative\" corpus.\n", | |
"\n", | |
"For skip-gram, our new objective function for observing the context word $ c-m+j$ given the center word $ c $ would be\n", | |
"\n", | |
"\n", | |
"$$ - \\log \\sigma (u_{c-m+j}^{T}\\cdot v_{c}) -\\sum_{k = 1}^K \\log \\sigma (- \\tilde{u}_{k}^{T}\\cdot v_{c}) $$\n", | |
"\n", | |
"\n", | |
"In the above formulation, $\\{\\tilde{u}_{k} | k = 1\\dots K\\}$ are sampled from $P_n(w)$, the unigram distribution. Let's discuss what $P_n(w)$ should be. While there is much discussion of what makes the best approximation, what seems to work best is the Unigram Model raised to the power of 3/4. Why 3/4? Here's an example that might help gain some intuition:\n", | |
"\n", | |
"+ ``is``: $0.9^{3/4} = 0.92$\n", | |
"+ ``Constitution``: $0.09^{3/4} = 0.16$\n", | |
"+ ``bombastic``: $0.01^{3/4} = 0.032$\n", | |
"\n", | |
"``Bombastic`` is now 3x more likely to be sampled while ``is`` only went up marginally.\n" | |
] | |
}, | |
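{ | |
"metadata": {}, | |
"cell_type": "markdown", | |
"source": [ | |
"*A minimal NumPy sketch (added for illustration; this is not the Keras implementation below, and the vocabulary size, dimensions and counts are made up):* the negative-sampling objective for a single (center, context) pair, with $K$ negative words drawn from a unigram distribution raised to the 3/4 power." | |
] | |
}, | |
{ | |
"metadata": {}, | |
"cell_type": "code", | |
"source": [ | |
"import numpy as np\n", | |
"\n", | |
"rng = np.random.default_rng(0)\n", | |
"\n", | |
"V, n, K = 1000, 50, 5                    # vocab size, embedding dim, negatives per pair (assumed)\n", | |
"U = rng.normal(scale=0.1, size=(V, n))   # output vectors u_w\n", | |
"W = rng.normal(scale=0.1, size=(V, n))   # input vectors v_w\n", | |
"\n", | |
"def sigmoid(x):\n", | |
"    return 1.0 / (1.0 + np.exp(-x))\n", | |
"\n", | |
"# Unigram distribution raised to the 3/4 power, then renormalised.\n", | |
"unigram_counts = rng.integers(1, 100, size=V).astype(float)\n", | |
"p_neg = unigram_counts ** 0.75\n", | |
"p_neg /= p_neg.sum()\n", | |
"\n", | |
"def neg_sampling_loss(center, context):\n", | |
"    # -log sigma(u_o^T v_c) - sum_k log sigma(-u_k^T v_c), with K sampled negatives\n", | |
"    v_c = W[center]\n", | |
"    u_o = U[context]\n", | |
"    negatives = rng.choice(V, size=K, p=p_neg)\n", | |
"    loss = -np.log(sigmoid(u_o @ v_c))\n", | |
"    loss -= np.log(sigmoid(-U[negatives] @ v_c)).sum()\n", | |
"    return loss\n", | |
"\n", | |
"print(neg_sampling_loss(center=3, context=17))" | |
], | |
"execution_count": null, | |
"outputs": [] | |
}, | |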
{ | |
"metadata": { | |
"id": "k774UXsooWty" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"### Continuous Bag of Words Model (CBOW)\n", | |
"\n", | |
"The CBOW approach is to treat {``The``, ``cat``, ``over``, ``the``, ``puddle``} as a context and from these words, be able to predict or generate the center word ``jumped``:\n", | |
"\n", | |
"+ We generate our one hot vectors for the input context ($w_j, \\forall j$).\n", | |
"+ We get our word vectors for the input context ($v_j = \\mathcal{V} w_j, \\forall j$).\n", | |
"+ Average these vectors to get a unique vector ($\\hat v = \\frac{1}{2m} \\sum_j v_j$).\n", | |
"+ Get the score vector ($z_i = \\mathcal{U} \\hat v$).\n", | |
"+ Turn the score into probabilities.\n", | |
"+ Our objective is to match this probability vector to the one hot vector of the actual word.\n", | |
"\n", | |
"Our objective function is:\n", | |
"\n", | |
"$$\n", | |
"J = \\frac{1}{T} \\sum^T_{c=1} \\left(- (u_{c}^T \\hat v) + \\log \\sum_{w \\in V} \\exp(u_{w}^T \\hat v) \\right)\n", | |
"$$\n", | |
"\n", | |
"and the graphical representation of the model:\n", | |
"\n", | |
"<center>\n", | |
"<img src=\"https://github.com/DataScienceUB/DeepLearningMaster2019/blob/master/images/word2vec-cbow.png?raw=1\" alt=\"\" style=\"width: 400px;\"/> \n", | |
"</center>" | |
] | |
}, | |
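{ | |
"metadata": {}, | |
"cell_type": "markdown", | |
"source": [ | |
"*A minimal NumPy sketch (added for illustration; the toy dimensions and indices are made up):* a single CBOW forward pass following the steps above: average the context vectors, score against the output matrix, apply a softmax, and compute the cross-entropy against the true center word." | |
] | |
}, | |
{ | |
"metadata": {}, | |
"cell_type": "code", | |
"source": [ | |
"import numpy as np\n", | |
"\n", | |
"rng = np.random.default_rng(1)\n", | |
"V_size, n, m = 8, 4, 2                  # toy vocab size, embedding dim, window size (assumed)\n", | |
"V_in = rng.normal(size=(n, V_size))     # input word matrix (n x |V|); columns are the v_i\n", | |
"U_out = rng.normal(size=(V_size, n))    # output word matrix (|V| x n); rows are the u_j\n", | |
"\n", | |
"context_ids = [0, 1, 3, 4]              # indices of the 2m context words\n", | |
"center_id = 2                           # index of the true center word\n", | |
"\n", | |
"# Average the embedded context vectors.\n", | |
"v_hat = V_in[:, context_ids].mean(axis=1)\n", | |
"\n", | |
"# Score vector and softmax probabilities.\n", | |
"z = U_out @ v_hat\n", | |
"y_hat = np.exp(z) / np.exp(z).sum()\n", | |
"\n", | |
"# Cross-entropy against the one-hot vector of the actual center word.\n", | |
"loss = -np.log(y_hat[center_id])\n", | |
"print(y_hat.round(3), loss)" | |
], | |
"execution_count": null, | |
"outputs": [] | |
}, | |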
{ | |
"metadata": { | |
"id": "1VT7lOWnoWty" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"## A ``word2vec`` implementation in Keras\n", | |
"\n", | |
"A word embedding layer is usually regarded as a mapping from a discrete set of objects (words) to a real valued vector, i.e. \n", | |
"\n", | |
"$$k\\in\\{1..|V|\\} \\rightarrow \\mathbb{R}^{n}$$\n", | |
"\n", | |
"Thus, we can represent the *Embedding layer* as $|V|\\times n$ matrix, or just a table/dictionary.\n", | |
"\n", | |
"$$\n", | |
"\\begin{matrix}\n", | |
"word_1: \\\\\n", | |
"word_2:\\\\\n", | |
"\\vdots\\\\\n", | |
"word_{|V|}: \\\\\n", | |
"\\end{matrix}\n", | |
"\\left[\n", | |
"\\begin{matrix}\n", | |
"x_{1,1}&x_{1,2}& \\dots &x_{1,n}\\\\\n", | |
"x_{2,1}&x_{2,2}& \\dots &x_{2,n}\\\\\n", | |
"\\vdots&&\\\\\n", | |
"x_{{|V|},1}&x_{{|V|},2}& \\dots &x_{{|V|},n}\\\\\n", | |
"\\end{matrix}\n", | |
"\\right]\n", | |
"$$\n", | |
"\n", | |
"In this sense, the basic operation that an embedding layer has to accomplish is that given a certain word it returns the assigned code. And the goal in learning is to learn the values in the matrix.\n", | |
"\n", | |
"\n", | |
"To train our data set using negative sampling and the skip-gram method, we need to create data samples for both valid context words and for negative samples. \n", | |
"\n", | |
"This involves scanning through the data set and picking target words, then randomly selecting context words from within the window of words around the target word (i.e. if the target word is “on” from “the cat sat on the mat”, with a window size of 2 the words “cat”, “sat”, “the”, “mat” could all be randomly selected as valid context words). \n", | |
"\n", | |
"It also involves randomly selecting negative samples outside of the selected target word context. \n", | |
"\n", | |
"Finally, we also need to set a label of 1 or 0, depending on whether the supplied context word is a true context word or a negative sample. \n", | |
"\n", | |
"Thankfully, Keras has a function (``skipgrams``) which does all that for us." | |
] | |
}, | |
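{ | |
"metadata": {}, | |
"cell_type": "markdown", | |
"source": [ | |
"*A quick check (added for illustration; the toy sequence is made up):* running Keras's ``skipgrams`` on a short list of word indices, to see the (target, context) pairs and their labels before applying it to the full dataset below." | |
] | |
}, | |
{ | |
"metadata": {}, | |
"cell_type": "code", | |
"source": [ | |
"from tensorflow.keras.preprocessing.sequence import skipgrams\n", | |
"\n", | |
"# Toy sequence of word indices (by convention index 0 is reserved for a non-word).\n", | |
"toy_sequence = [1, 2, 3, 4, 5, 1, 2]\n", | |
"\n", | |
"pairs, pair_labels = skipgrams(toy_sequence, vocabulary_size=6,\n", | |
"                               window_size=2, negative_samples=1.0)\n", | |
"for (target, context), label in zip(pairs, pair_labels):\n", | |
"    # label 1: true context pair, label 0: randomly drawn negative sample\n", | |
"    print(target, context, label)" | |
], | |
"execution_count": null, | |
"outputs": [] | |
}, | |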
{ | |
"metadata": { | |
"id": "XnViLO1voWtz", | |
"outputId": "02a3e7cc-6aeb-4fc0-859c-53ccab0131a2", | |
"colab": { | |
"base_uri": "https://localhost:8080/" | |
} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"from tensorflow.keras.models import Model\n", | |
"from tensorflow.keras.layers import Input, Dense, Reshape, dot\n", | |
"from tensorflow.keras.layers import Embedding\n", | |
"from tensorflow.keras.preprocessing.sequence import skipgrams\n", | |
"from tensorflow.keras.preprocessing import sequence\n", | |
"\n", | |
"import urllib.request\n", | |
"import collections\n", | |
"import os\n", | |
"import zipfile\n", | |
"\n", | |
"import numpy as np\n", | |
"import tensorflow as tf\n", | |
"\n", | |
"def maybe_download(filename, url, expected_bytes):\n", | |
" \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n", | |
" if not os.path.exists(filename):\n", | |
" filename, _ = urllib.request.urlretrieve(url + filename, filename)\n", | |
" statinfo = os.stat(filename)\n", | |
" if statinfo.st_size == expected_bytes:\n", | |
" print('Found and verified', filename)\n", | |
" else:\n", | |
" print(statinfo.st_size)\n", | |
" raise Exception(\n", | |
" 'Failed to verify ' + filename + '. Can you get to it with a browser?')\n", | |
" return filename\n", | |
"\n", | |
"\n", | |
"# Read the data into a list of strings.\n", | |
"def read_data(filename):\n", | |
" \"\"\"Extract the first file enclosed in a zip file as a list of words.\"\"\"\n", | |
" with zipfile.ZipFile(filename) as f:\n", | |
" data = tf.compat.as_str(f.read(f.namelist()[0])).split()\n", | |
" return data\n", | |
"\n", | |
"\n", | |
"def build_dataset(words, n_words):\n", | |
" \"\"\"Process raw inputs into a dataset.\"\"\"\n", | |
" count = [['UNK', -1]]\n", | |
" count.extend(collections.Counter(words).most_common(n_words - 1))\n", | |
" dictionary = dict()\n", | |
" for word, _ in count:\n", | |
" dictionary[word] = len(dictionary)\n", | |
" data = list()\n", | |
" unk_count = 0\n", | |
" for word in words:\n", | |
" if word in dictionary:\n", | |
" index = dictionary[word]\n", | |
" else:\n", | |
" index = 0 # dictionary['UNK']\n", | |
" unk_count += 1\n", | |
" data.append(index)\n", | |
" count[0][1] = unk_count\n", | |
" reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))\n", | |
" return data, count, dictionary, reversed_dictionary\n", | |
"\n", | |
"def collect_data(vocabulary_size=10000):\n", | |
" url = 'http://mattmahoney.net/dc/'\n", | |
" filename = maybe_download('text8.zip', url, 31344016)\n", | |
" vocabulary = read_data(filename)\n", | |
" print('First words of the dataset:',vocabulary[:7])\n", | |
" data, count, dictionary, reverse_dictionary = build_dataset(vocabulary,\n", | |
" vocabulary_size)\n", | |
" del vocabulary # Hint to reduce memory.\n", | |
" return data, count, dictionary, reverse_dictionary\n", | |
"\n", | |
"vocab_size = 10000\n", | |
"data, count, dictionary, reverse_dictionary = collect_data(vocabulary_size=vocab_size)\n", | |
"print('First words representation:',data[:7])\n" | |
], | |
"execution_count": 1, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"name": "stdout", | |
"text": [ | |
"Found and verified text8.zip\n", | |
"First words of the dataset: ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse']\n", | |
"First words representation: [5234, 3081, 12, 6, 195, 2, 3134]\n" | |
] | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "-oA_-AWu57ud", | |
"outputId": "9c3e1997-1180-4db1-c7c6-b3e09243a8b5", | |
"colab": { | |
"base_uri": "https://localhost:8080/" | |
} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"window_size = 3\n", | |
"vector_dim = 300\n", | |
"epochs = 200000\n", | |
"\n", | |
"valid_size = 16 # Random set of words to evaluate similarity on.\n", | |
"valid_window = 100 \n", | |
"valid_examples = np.random.choice(valid_window, valid_size, replace=False)\n", | |
"\n", | |
"# Generates a word rank-based probabilistic sampling table.\n", | |
"sampling_table = sequence.make_sampling_table(vocab_size)\n", | |
"\n", | |
"# Generates skipgram word pairs.\n", | |
"# This function transforms a sequence of word indexes \n", | |
"# (list of integers) into tuples of words of the form:\n", | |
"# (word, word in the same window), with label 1 (positive samples).\n", | |
"# (word, random word from the vocabulary), with label 0 (negative samples).\n", | |
"couples, labels = skipgrams(data, vocab_size, window_size=window_size, sampling_table=sampling_table)\n", | |
"\n", | |
"word_target, word_context = zip(*couples)\n", | |
"word_target = np.array(word_target, dtype=\"int32\")\n", | |
"word_context = np.array(word_context, dtype=\"int32\")\n", | |
"\n", | |
"print(couples[:10], labels[:10])" | |
], | |
"execution_count": 2, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"name": "stdout", | |
"text": [ | |
"[[4386, 100], [134, 9561], [9002, 751], [4943, 589], [3677, 9827], [611, 207], [370, 7437], [3099, 2566], [454, 3772], [101, 110]] [1, 0, 0, 0, 0, 1, 0, 0, 0, 1]\n" | |
] | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "Wy2tYyEL4ft9", | |
"outputId": "9ec15485-4b36-4f88-f548-fdcf51a0d2c1", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 239 | |
} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"# create some input variables\n", | |
"input_target = Input((1,))\n", | |
"input_context = Input((1,))\n", | |
"\n", | |
"embedding = Embedding(vocab_size, vector_dim, input_length=1, name='embedding')\n", | |
"target = embedding(input_target)\n", | |
"target = Reshape((vector_dim, 1))(target)\n", | |
"\n", | |
"context = embedding(input_context)\n", | |
"context = Reshape((vector_dim, 1))(context)\n", | |
"\n", | |
"# setup a cosine similarity operation which will be output in a secondary model \n", | |
"similarity = dot([target, context], normalize=True, axes=0) #(download older TF to fix error, or follow error recommendations)\n", | |
"\n", | |
"# now perform the dot product operation to get a similarity measure\n", | |
"\n", | |
"dot_product = dot([target, context], axes=1)\n", | |
"dot_product = Reshape((1,))(dot_product)\n", | |
"\n", | |
"# add the sigmoid output layer\n", | |
"output = Dense(1, activation='sigmoid')(dot_product)\n", | |
"\n", | |
"# create the primary training model\n", | |
"model = Model(inputs=[input_target, input_context], outputs=output)\n", | |
"model.compile(loss='binary_crossentropy', optimizer='rmsprop')\n", | |
"\n", | |
"# create a secondary validation model to run our similarity checks during training\n", | |
"validation_model = Model(inputs=[input_target, input_context], outputs=similarity)\n", | |
"\n", | |
"class SimilarityCallback:\n", | |
" def run_sim(self):\n", | |
" for i in range(valid_size):\n", | |
" valid_word = reverse_dictionary[valid_examples[i]]\n", | |
" top_k = 8 # number of nearest neighbors\n", | |
" sim = self._get_sim(valid_examples[i])\n", | |
" nearest = (-sim).argsort()[1:top_k + 1]\n", | |
" log_str = 'Nearest to %s:' % valid_word\n", | |
" for k in range(top_k):\n", | |
" close_word = reverse_dictionary[nearest[k]]\n", | |
" log_str = '%s %s,' % (log_str, close_word)\n", | |
" print(log_str)\n", | |
"\n", | |
" @staticmethod\n", | |
" def _get_sim(valid_word_idx):\n", | |
" sim = np.zeros((vocab_size,))\n", | |
" in_arr1 = np.zeros((1,))\n", | |
" in_arr2 = np.zeros((1,))\n", | |
" in_arr1[0,] = valid_word_idx\n", | |
" for i in range(vocab_size):\n", | |
" in_arr2[0,] = i\n", | |
" out = validation_model.predict_on_batch([in_arr1, in_arr2])\n", | |
" sim[i] = out\n", | |
" return sim\n", | |
" \n", | |
"sim_cb = SimilarityCallback()\n", | |
"\n", | |
"arr_1 = np.zeros((1,))\n", | |
"arr_2 = np.zeros((1,))\n", | |
"arr_3 = np.zeros((1,))\n", | |
"for cnt in range(epochs):\n", | |
" idx = np.random.randint(0, len(labels)-1)\n", | |
" arr_1[0,] = word_target[idx]\n", | |
" arr_2[0,] = word_context[idx]\n", | |
" arr_3[0,] = labels[idx]\n", | |
" loss = model.train_on_batch([arr_1, arr_2], arr_3)\n", | |
" if cnt % 100 == 0:\n", | |
" print(\"Iteration {}, loss={}\".format(cnt, loss))\n", | |
" if cnt % 10000 == 0:\n", | |
" sim_cb.run_sim()" | |
], | |
"execution_count": 4, | |
"outputs": [ | |
{ | |
"output_type": "error", | |
"ename": "TypeError", | |
"evalue": "ignored", | |
"traceback": [ | |
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", | |
"\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)", | |
"\u001b[0;32m<ipython-input-4-f387d5fa3585>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 11\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 12\u001b[0m \u001b[0;31m# setup a cosine similarity operation which will be output in a secondary model (contains tiny error)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 13\u001b[0;31m \u001b[0msimilarity\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mdot\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mtarget\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcontext\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnormalize\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 14\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 15\u001b[0m \u001b[0;31m# now perform the dot product operation to get a similarity measure\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", | |
"\u001b[0;31mTypeError\u001b[0m: dot() missing 1 required positional argument: 'axes'" | |
] | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "1YEHjTsCoWt7" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"## ``par2vec``\n", | |
"\n", | |
"What about a vector representation for phrases/paragraphs/documents?\n", | |
"\n", | |
"The ``par2vec`` approach for learning paragraph vectors is inspired by the methods for learning the word vectors. The inspiration is that the word vectors are asked to contribute to a prediction task about the next word in the sentence.\n", | |
"\n", | |
"We will consider a *paragraph* vector. The paragraph vectors are also\n", | |
"asked to contribute to the prediction task of the next word\n", | |
"given many contexts sampled from the paragraph.\n", | |
"\n", | |
"In ``par2vec`` framework, every paragraph is mapped to a unique vector, represented by a column in matrix D and every word is also mapped to a unique vector, represented by a column in matrix W. The paragraph vector and word vectors are averaged or concatenated to predict the next word in a context.\n", | |
"\n", | |
"<center>\n", | |
"<img src=\"https://github.com/DataScienceUB/DeepLearningMaster2019/blob/master/images/par2vec.png?raw=1\" alt=\"\" style=\"width: 500px;\"/> \n", | |
"(Source: https://cs.stanford.edu/~quocle/paragraph_vector.pdf)\n", | |
"</center>\n", | |
"\n", | |
"The paragraph token can be thought of as another word. It acts as a memory that remembers what is missing from the\n", | |
"current context – or the topic of the paragraph. For this reason, we often call this model the Distributed Memory\n", | |
"Model of Paragraph Vectors (PV-DM).\n", | |
"\n", | |
"The contexts are fixed-length and sampled from a sliding window over the paragraph. \n", | |
"\n", | |
"The paragraph vector is shared across all contexts generated from the same paragraph but not across paragraphs. \n", | |
"\n", | |
"The word vector matrix W, however, is shared across paragraphs. " | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "mIMHfFKFoWt8" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"At prediction time, one needs to perform an inference step to compute the paragraph vector for a new paragraph. This\n", | |
"is also obtained by gradient descent. In this step, the parameters for the rest of the model, the word vectors and\n", | |
"the softmax weights, are fixed." | |
] | |
}, | |
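{ | |
"metadata": {}, | |
"cell_type": "markdown", | |
"source": [ | |
"*A minimal sketch (added for illustration):* gensim, which is not used elsewhere in this notebook, ships a PV-DM implementation as ``Doc2Vec``. Assuming gensim is installed, this trains paragraph vectors on a few toy documents and then runs the inference step for a new paragraph." | |
] | |
}, | |
{ | |
"metadata": {}, | |
"cell_type": "code", | |
"source": [ | |
"# Assumes gensim is installed (e.g. pip install gensim); not used elsewhere in this notebook.\n", | |
"from gensim.models.doc2vec import Doc2Vec, TaggedDocument\n", | |
"\n", | |
"docs = [\"the cat jumped over the puddle\",\n", | |
"        \"deep learning models need vector inputs\",\n", | |
"        \"stocks bonds and banks move money\"]\n", | |
"tagged = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(docs)]\n", | |
"\n", | |
"# dm=1 selects the Distributed Memory model (PV-DM) described above.\n", | |
"model = Doc2Vec(tagged, vector_size=50, window=2, min_count=1, epochs=40, dm=1)\n", | |
"\n", | |
"# Inference for a new paragraph: word vectors and softmax weights stay fixed,\n", | |
"# only the new paragraph vector is fitted by gradient descent.\n", | |
"new_vec = model.infer_vector(\"the cat sat on the mat\".split())\n", | |
"print(new_vec[:5])" | |
], | |
"execution_count": null, | |
"outputs": [] | |
}, | |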
{ | |
"metadata": { | |
"id": "89gB95WdoWt8" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"### Using GloVe pre-trained word embeddings for classification. 1-D Convolutions.\n", | |
"\n", | |
"GloVe (https://nlp.stanford.edu/pubs/glove.pdf) consists of a weighted least squares model that trains on global word-word co-occurrence counts and thus makes efficient use of statistics. The model produces a word vector space with meaningful sub-structure. It shows state-of-the-art performance on the word analogy task, and outperforms other current methods on several word similarity tasks.\n", | |
"\n", | |
"GloVe consists of a weighted least squares model that trains on global word-word co-occurrence counts and thus makes efficient use of statistics. The model produces a word vector space with meaningful sub-structure. It shows state-of-the-art performance on the word analogy task, and outperforms other current methods on several word similarity tasks.\n", | |
"\n", | |
"\n", | |
"\n" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "o3YVnEQKoWt9" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"'''This script loads pre-trained word embeddings (GloVe embeddings)\n", | |
"into a frozen Keras Embedding layer, and uses it to\n", | |
"train a text classification model on the 20 Newsgroup dataset\n", | |
"(classication of newsgroup messages into 20 different categories).\n", | |
"GloVe embedding data can be found at:\n", | |
"http://nlp.stanford.edu/data/glove.6B.zip (822MB)\n", | |
"(source page: http://nlp.stanford.edu/projects/glove/)\n", | |
"20 Newsgroup data can be found at:\n", | |
"http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.html\n", | |
"'''\n", | |
"\n", | |
"from __future__ import print_function\n", | |
"\n", | |
"import os\n", | |
"import sys\n", | |
"import numpy as np\n", | |
"from keras.preprocessing.text import Tokenizer\n", | |
"from keras.preprocessing.sequence import pad_sequences\n", | |
"from keras.layers import Dense, Input, Flatten\n", | |
"from keras.layers import Conv1D, MaxPooling1D, Embedding\n", | |
"from keras.models import Model\n", | |
"\n", | |
"def to_categorical(y, num_classes=None):\n", | |
" \"\"\"Converts a class vector (integers) to binary class matrix.\n", | |
" E.g. for use with categorical_crossentropy.\n", | |
" # Arguments\n", | |
" y: class vector to be converted into a matrix\n", | |
" (integers from 0 to num_classes).\n", | |
" num_classes: total number of classes.\n", | |
" # Returns\n", | |
" A binary matrix representation of the input.\n", | |
" \"\"\"\n", | |
" y = np.array(y, dtype='int').ravel()\n", | |
" if not num_classes:\n", | |
" num_classes = np.max(y) + 1\n", | |
" n = y.shape[0]\n", | |
" categorical = np.zeros((n, num_classes))\n", | |
" categorical[np.arange(n), y] = 1\n", | |
" return categorical\n", | |
"\n", | |
"BASE_DIR = ''\n", | |
"GLOVE_DIR = BASE_DIR + '/glove.6B/'\n", | |
"TEXT_DATA_DIR = BASE_DIR + '/20_newsgroup/'\n", | |
"MAX_SEQUENCE_LENGTH = 1000\n", | |
"MAX_NB_WORDS = 20000\n", | |
"EMBEDDING_DIM = 100\n", | |
"VALIDATION_SPLIT = 0.2\n", | |
"\n", | |
"# first, build index mapping words in the embeddings set\n", | |
"# to their embedding vector\n", | |
"\n", | |
"print('Indexing word vectors.')\n", | |
"\n", | |
"embeddings_index = {}\n", | |
"f = open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt'))\n", | |
"for line in f:\n", | |
" values = line.split()\n", | |
" word = values[0]\n", | |
" coefs = np.asarray(values[1:], dtype='float32')\n", | |
" embeddings_index[word] = coefs\n", | |
"f.close()\n", | |
"\n", | |
"print('Found %s word vectors.' % len(embeddings_index))\n", | |
"\n", | |
"# second, prepare text samples and their labels\n", | |
"print('Processing text dataset')\n", | |
"\n", | |
"texts = [] # list of text samples\n", | |
"labels_index = {} # dictionary mapping label name to numeric id\n", | |
"labels = [] # list of label ids\n", | |
"for name in sorted(os.listdir(TEXT_DATA_DIR)):\n", | |
" path = os.path.join(TEXT_DATA_DIR, name)\n", | |
" if os.path.isdir(path):\n", | |
" label_id = len(labels_index)\n", | |
" labels_index[name] = label_id\n", | |
" for fname in sorted(os.listdir(path)):\n", | |
" if fname.isdigit():\n", | |
" fpath = os.path.join(path, fname)\n", | |
" if sys.version_info < (3,):\n", | |
" f = open(fpath)\n", | |
" else:\n", | |
" f = open(fpath, encoding='latin-1')\n", | |
" t = f.read()\n", | |
" i = t.find('\\n\\n') # skip header\n", | |
" if 0 < i:\n", | |
" t = t[i:]\n", | |
" texts.append(t)\n", | |
" f.close()\n", | |
" labels.append(label_id)\n", | |
"\n", | |
"print('Found %s texts.' % len(texts))\n", | |
"\n", | |
"# finally, vectorize the text samples into a 2D integer tensor\n", | |
"tokenizer = Tokenizer(num_words=MAX_NB_WORDS)\n", | |
"tokenizer.fit_on_texts(texts)\n", | |
"sequences = tokenizer.texts_to_sequences(texts)\n", | |
"\n", | |
"word_index = tokenizer.word_index\n", | |
"print('Found %s unique tokens.' % len(word_index))\n", | |
"\n", | |
"data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)\n", | |
"\n", | |
"labels = to_categorical(np.asarray(labels))\n", | |
"print('Shape of data tensor:', data.shape)\n", | |
"print('Shape of label tensor:', labels.shape)\n", | |
"\n", | |
"# split the data into a training set and a validation set\n", | |
"indices = np.arange(data.shape[0])\n", | |
"np.random.shuffle(indices)\n", | |
"data = data[indices]\n", | |
"labels = labels[indices]\n", | |
"num_validation_samples = int(VALIDATION_SPLIT * data.shape[0])\n", | |
"\n", | |
"x_train = data[:-num_validation_samples]\n", | |
"y_train = labels[:-num_validation_samples]\n", | |
"x_val = data[-num_validation_samples:]\n", | |
"y_val = labels[-num_validation_samples:]\n", | |
"\n", | |
"print('Preparing embedding matrix.')\n", | |
"\n", | |
"# prepare embedding matrix\n", | |
"num_words = min(MAX_NB_WORDS, len(word_index))\n", | |
"embedding_matrix = np.zeros((num_words, EMBEDDING_DIM))\n", | |
"for word, i in word_index.items():\n", | |
" if i >= MAX_NB_WORDS:\n", | |
" continue\n", | |
" embedding_vector = embeddings_index.get(word)\n", | |
" if embedding_vector is not None:\n", | |
" # words not found in embedding index will be all-zeros.\n", | |
" embedding_matrix[i] = embedding_vector\n", | |
"\n", | |
"# load pre-trained word embeddings into an Embedding layer\n", | |
"# note that we set trainable = False so as to keep the embeddings fixed\n", | |
"embedding_layer = Embedding(num_words,\n", | |
" EMBEDDING_DIM,\n", | |
" weights=[embedding_matrix],\n", | |
" input_length=MAX_SEQUENCE_LENGTH,\n", | |
" trainable=False)\n", | |
"\n", | |
"print('Training model.')\n", | |
"\n", | |
"# train a 1D convnet with global maxpooling\n", | |
"sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')\n", | |
"embedded_sequences = embedding_layer(sequence_input)\n", | |
"x = Conv1D(128, 5, activation='relu')(embedded_sequences)\n", | |
"x = MaxPooling1D(5)(x)\n", | |
"x = Conv1D(128, 5, activation='relu')(x)\n", | |
"x = MaxPooling1D(5)(x)\n", | |
"x = Conv1D(128, 5, activation='relu')(x)\n", | |
"x = MaxPooling1D(35)(x)\n", | |
"x = Flatten()(x)\n", | |
"x = Dense(128, activation='relu')(x)\n", | |
"preds = Dense(len(labels_index), activation='softmax')(x)\n", | |
"\n", | |
"model = Model(sequence_input, preds)\n", | |
"model.compile(loss='categorical_crossentropy',\n", | |
" optimizer='rmsprop',\n", | |
" metrics=['acc'])\n", | |
"\n", | |
"model.fit(x_train, y_train,\n", | |
" batch_size=128,\n", | |
" epochs=10,\n", | |
" validation_data=(x_val, y_val))" | |
], | |
"execution_count": null, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "lTyFm8wwQh83" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"## Are word embeddings still useful?\n", | |
"\n", | |
"The word embeding models we have seen have several limitations:\n", | |
"\n", | |
"+ **Word2Vec** and **Glove** handle whole words, and can't easily handle words they haven't seen before. \n", | |
"+ Words can be ambigous, but we are assigning only one embedding. Embeddings don't depend on the context.\n", | |
"\n", | |
"**FastText** (based on Word2Vec) is word-fragment (character) based and can usually handle unseen words, although it still generates one vector per word. \n", | |
"\n", | |
"Lately, several new \"ontext-aware\" models have been proposed.\n", | |
"\n", | |
"**ELMo** and **BERT** incorporate context, handling polysemy and nuance much better (e.g. sentences like \"Time flies like an arrow. Fruit flies like bananas\") . This in general improves performance notably on downstream tasks.\n", | |
"\n", | |
"For natural language tasks, **ELMo** and **BERT** represent the best option at this time. For other kinds of tasks (for example, item-embedding for recommendert systems), **Word2Vec** is still an alternative. \n", | |
"\n" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "n4i0LQ-ETCcL" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"## Bibliography\n", | |
"\n", | |
"On word embeddings (Part I, II and III): http://ruder.io/word-embeddings-1/" | |
] | |
} | |
] | |
} |