Predictive text with concatenated word vectors https://gist.github.com/aparrish/29574a35a51e955d4c6284da42b8f53a
# coding: utf-8

# # Predictive text with concatenated word vectors
#
# By [Allison Parrish](http://www.decontextualize.com/)
#
# This notebook demonstrates one way to implement predictive text like [iOS QuickType](https://www.apple.com/sg/ios/whats-new/quicktype/). It works sort of like a [Markov chain text generator](https://github.com/aparrish/rwet/blob/master/ngrams-and-markov-chains.ipynb), but uses nearest-neighbor lookups on a database of concatenated [word vectors](https://github.com/aparrish/rwet/blob/master/understanding-word-vectors.ipynb) instead of n-grams of tokens. You can build this database with any text you want!
#
# To get this code to work, you'll need to [install spaCy](https://spacy.io/usage/#section-quickstart) and download a [spaCy model with word vectors](https://spacy.io/usage/models#available) (like `en_core_web_lg`). You'll also need [Simple Neighbors](https://github.com/aparrish/simpleneighbors), a Python library I made for easy nearest neighbor lookups:

get_ipython().system('pip install simpleneighbors')
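
# If you haven't downloaded the `en_core_web_lg` model yet, spaCy's download command (below) will fetch it. It's a fairly large download, so skip this cell if the model is already installed.
get_ipython().system('python -m spacy download en_core_web_lg')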

# ## How it works
#
# The goal of a predictive text interface is to look at what the user has typed so far and then suggest the word that is most likely to come next. The system in this notebook does this by looking at each sequence of words of a particular length `n`, looking up the word vector in spaCy for each of those words, and concatenating those vectors to create one long vector. It then stores that long vector along with the word that *follows* the sequence.
#
# To calculate suggestions for a particular text from this database, you can just look at the last `n` words in the text, concatenate the word vectors for that stretch of words, and then find the entries in the database whose vectors are nearest. The words stored along with those sequences (i.e., the words that followed the original sequences) are the words the system suggests as most likely to come next.
#
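# Here's a toy sketch of that windowing idea in plain Python, using exact
# string matches instead of word vectors and nearest neighbors, just to
# illustrate the shape of the "n-word sequence -> following word" database
# (the names are made up for the example):

toy_words = "the cat sat on the mat and the cat slept".split()
toy_n = 2
toy_db = {}
for i in range(len(toy_words) - toy_n):
    key = tuple(toy_words[i:i+toy_n])                      # the n-word sequence
    toy_db.setdefault(key, []).append(toy_words[i+toy_n])  # the word that follows it
toy_db[("the", "cat")]  # -> ['sat', 'slept']
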
# So let's implement it! First, we'll import the libraries we need:

from simpleneighbors import SimpleNeighbors
import spacy

# And then load the spaCy model. (This will take a few seconds.)

nlp = spacy.load('en_core_web_lg')

# You'll need to have some text to use to build the database. If you're following along, download a [plain text file from Project Gutenberg](https://www.gutenberg.org/) to the same directory as this notebook and put its filename below.

filename = "1342-0.txt"

# When you're parsing a text with spaCy, it can use up a lot of memory and either throw "out of memory" errors or cause your computer to slow down as it swaps memory to disk. To ameliorate this, we're only going to train on the first 500k characters of the text. You can change the number in the cell below if you want even fewer characters (or more).

cutoff = 500000

# The code in the cell below parses your text file into sentences (this might take a few seconds):

doc = nlp(open(filename).read()[:cutoff],
          disable=['tagger'])
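
# A quick sanity check on the parse (just a convenience; the numbers will depend on your text and cutoff):
print(len(doc), "tokens in", sum(1 for _ in doc.sents), "sentences")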

# The `concatenate_vectors` function below takes a sequence of spaCy tokens (like those that you get when you parse a text) and returns the *concatenated* word vectors of those tokens. "Concatenating" vectors means to make one big vector from several smaller vectors simply by lining them all up. For example, if you had three 2D vectors `a`, `b`, and `c`:
#
#     a = (1, 2)
#     b = (5, 6)
#     c = (11, 12)
#
# The concatenation of these vectors would be this six-dimensional vector:
#
#     (1, 2, 5, 6, 11, 12)

import numpy as np

def concatenate_vectors(seq):
    # Line up the word vector of each token end-to-end, producing a single
    # vector of length len(seq) times the model's vector width (300 for en_core_web_lg).
    return np.concatenate(np.array([w.vector for w in seq]), axis=0)

concatenate_vectors(nlp("hello there")).shape
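
# To make the toy example above concrete, here's the same concatenation done with the `a`, `b`, and `c` vectors from the markdown cell (plain NumPy, no spaCy involved):
a = np.array([1, 2])
b = np.array([5, 6])
c = np.array([11, 12])
np.concatenate([a, b, c])  # -> array([ 1,  2,  5,  6, 11, 12])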

# Using vectors instead of tokens is a simple way to handle predicting the next word even for sequences that don't occur verbatim in the source text. Using *concatenated* vectors makes it possible to find entries that are similar in both meaning and word order (which is important when predicting the next word in a text).
#
# The code in the cell below builds the nearest neighbor index that maps the concatenated vectors for each sequence of words in the source text to the word that follows. You can adjust `n` to change the length of the sequence considered. (In my experiments, values from 2–4 usually work best.)
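#
# One note on the `n*300` in the next cell: 300 is the vector width of `en_core_web_lg`. If you're using a different model, you can ask spaCy for the width instead of hard-coding it:

nlp.vocab.vectors_length  # 300 for en_core_web_lg; pass n * this value to SimpleNeighbors if your model's vectors differ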

n = 3
nns = SimpleNeighbors(n*300)
for seq in doc.sents:
    seq = [item for item in seq if item.is_alpha]
    for i in range(len(seq)-n):
        vec = concatenate_vectors(seq[i:i+n])  # vector for the n-word window
        next_item = seq[i+n].text              # the word that follows it
        nns.add_one(next_item, vec)
nns.build()

# Once the index is built, you can test it out! Plug a phrase of exactly `n` words (three, with the value set above) into the `start` variable below and run the cell. (It has to be exactly `n` words so that the concatenated vector matches the dimensions of the index.) You'll see the words that the nearest neighbor lookup suggests as most likely to come next.

start = "I have never"
nns.nearest(concatenate_vectors(nlp(start)))
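
# For longer stretches of text, a small helper like this one (a convenience sketch that mirrors what the Flask route further down does) keeps just the last `n` tokens before doing the lookup:
def suggest_next(text, count=5):
    parsed = list(nlp(text, disable=['tagger', 'parser']))
    if len(parsed) < n:
        return []  # not enough context yet
    return nns.nearest(concatenate_vectors(parsed[-n:]), count)

suggest_next("It is a truth universally acknowledged that")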

# ## Interactive web version
#
# The code below starts a [Flask](http://flask.pocoo.org/) web server on your computer to serve up an interactive version of the suggestion code. Run the cell and click on the link that appears below. If you make changes, make sure to interrupt the kernel before re-running the cell. You can interrupt the kernel either via the menu bar (`Kernel > Interrupt`) or by hitting Escape and typing `i` twice.

autocomplete_html = """
<style type="text/css">
  * {
    box-sizing: border-box;
    font-family: sans-serif;
  }
  #suggestions {
    width: 33%;
  }
  #suggestions p {
    margin: 0;
    width: 20%;
    background-color: #aaa;
    color: white;
    float: left;
    padding: 5px;
    border: 1px white solid;
    text-align: center;
    font-size: 16px;
  }
</style>
<textarea id="typehere" placeholder="type here!"
          style="width: 33%;
                 padding: 0.5em;
                 font-family: sans-serif;
                 font-size: 16px;"
          rows="16"></textarea>
<div id="suggestions">
  (suggestions will appear here)
</div>
<script>
function createChoice(val) {
  let tn = document.createTextNode(val);
  let ptag = document.createElement('p');
  ptag.appendChild(tn);
  ptag.onclick = function() {
    addText(" " + val);
  }
  return ptag;
}
function addText(newText) {
  let el = document.querySelector('#typehere');
  var start = el.selectionStart;
  var end = el.selectionEnd;
  var text = el.value;
  var before = text.substring(0, start);
  var after = text.substring(end, text.length);
  el.value = (before + newText + after);
  el.selectionStart = el.selectionEnd = start + newText.length;
  el.focus();
  el.onkeyup();
}
document.querySelector('#typehere').onkeyup = async function() {
  console.log("hi");
  let el = document.querySelector('#typehere');
  let val = el.value;
  var start = el.selectionStart;
  var end = el.selectionEnd;
  var text = el.value;
  var before = text.substring(0, start);
  let resp = await getResp(before);
  console.log(resp);
  let suggestdiv = document.getElementById("suggestions");
  suggestdiv.innerHTML = "";
  for (let s of resp) {
    suggestdiv.appendChild(createChoice(s));
  }
};
async function getResp(val) {
  let resp = await fetch("/suggest.json?text=" +
                         encodeURIComponent(val));
  let data = await resp.json();
  return data['suggestions'];
}
</script>
"""

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/suggest.json")
def suggest():
    text = request.args['text']
    parsed = list(nlp(text, disable=['tagger', 'parser']))
    if len(parsed) >= n:
        suggestions = nns.nearest(concatenate_vectors(parsed[-n:]), 5)
    else:
        suggestions = []
    return jsonify({'suggestions': suggestions})

@app.route("/")
def home():
    return autocomplete_html

app.run()
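
# If port 5000 is already in use on your machine, `app.run()` accepts a `port`
# argument, e.g. `app.run(port=5001)`.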