Fuzzy sentence matching in Python - Bommarito Consulting, LLC: http://bommaritollc.com/2014/06/fuzzy-match-sentences-in-python (gist last active August 29, 2015)
# ## IPython Notebook for [Bommarito Consulting](http://bommaritollc.com/) Blog Post
# ### **Link**: [Fuzzy sentence matching in Python](http://bommaritollc.com/2014/06/fuzzy-match-sentences-in-python)
# **Author**: [Michael J. Bommarito II](https://www.linkedin.com/in/bommarito/)
# Imports
import nltk.corpus
import nltk.tokenize
import nltk.stem.snowball
import string

# Get default English stopwords and extend with punctuation
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(string.punctuation)
stopwords.append('')
# Create tokenizer and stemmer
# Note: PunktWordTokenizer was removed in NLTK 3.0; TreebankWordTokenizer is a
# reasonable drop-in word tokenizer on current NLTK releases.
tokenizer = nltk.tokenize.TreebankWordTokenizer()
stemmer = nltk.stem.snowball.SnowballStemmer('english')
def is_ci_token_stopword_stem_match(a, b):
    """Return True if a and b match after case-folding, punctuation and
    stopword removal, and stemming."""
    def stems(text):
        tokens = (token.lower().strip(string.punctuation)
                  for token in tokenizer.tokenize(text))
        return [stemmer.stem(token) for token in tokens if token not in stopwords]
    return stems(a) == stems(b)
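To see the pipeline's effect without an NLTK installation or corpus download, here is a dependency-free sketch of the same idea: lowercase each token, strip punctuation, drop stopwords, stem, and compare the resulting stem lists. The `STOPWORDS` set and `toy_stem` suffix-stripper are crude stand-ins for NLTK's stopword corpus and `SnowballStemmer`, used purely for illustration.

```python
import string

# Toy stand-ins for NLTK's English stopword corpus and SnowballStemmer.
STOPWORDS = {"the", "a", "an", "is", "it", "to", ""}

def toy_stem(token):
    # Crude suffix stripping; NLTK's SnowballStemmer is far more careful.
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def normalized_stems(sentence):
    # Lowercase, strip surrounding punctuation, drop stopwords, then stem.
    tokens = (t.lower().strip(string.punctuation) for t in sentence.split())
    return [toy_stem(t) for t in tokens if t not in STOPWORDS]

def toy_match(a, b):
    return normalized_stems(a) == normalized_stems(b)

print(toy_match("The dog is barking!", "the dog barked"))  # True
```

Both sentences reduce to the stem list `["dog", "bark"]`, so they match even though their surface forms differ; the real function above does the same with NLTK's tokenizer, stopword list, and stemmer.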