Fuzzy sentence matching in Python - Bommarito Consulting, LLC: http://bommaritollc.com/2014/06/fuzzy-match-sentences-in-python
# ## IPython Notebook for [Bommarito Consulting](http://bommaritollc.com/) Blog Post
# ### **Link**: [Fuzzy sentence matching in Python](http://bommaritollc.com/2014/06/fuzzy-match-sentences-in-python)
# **Author**: [Michael J. Bommarito II](https://www.linkedin.com/in/bommarito/)

# Imports
import nltk.corpus
import nltk.tokenize.punkt
import string

# Get default English stopwords and extend with punctuation
stopwords = nltk.corpus.stopwords.words('english')
stopwords.extend(string.punctuation)
stopwords.append('')

# Create tokenizer
tokenizer = nltk.tokenize.punkt.PunktWordTokenizer()

def is_ci_token_stopword_match(a, b):
    """Return True if a and b match after case-folding and removing punctuation and stopwords."""
    tokens_a = [token.lower().strip(string.punctuation) for token in tokenizer.tokenize(a)
                if token.lower().strip(string.punctuation) not in stopwords]
    tokens_b = [token.lower().strip(string.punctuation) for token in tokenizer.tokenize(b)
                if token.lower().strip(string.punctuation) not in stopwords]
    return tokens_a == tokens_b
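
# A minimal usage sketch (not from the original gist): the sentence pair below is
# illustrative only, and running it assumes the NLTK 'stopwords' corpus has been
# downloaded (nltk.download('stopwords')). In newer NLTK releases PunktWordTokenizer
# may no longer be available; if so, nltk.tokenize.word_tokenize is a comparable
# word-level tokenizer that can stand in for it here.
sentence_a = "The quick brown fox jumped over the lazy dog."
sentence_b = "Quick brown fox, jumped over a lazy dog!"
# Both sentences reduce to the same content tokens once case, punctuation,
# and stopwords ("the", "a", "over") are stripped, so this prints True.
print(is_ci_token_stopword_match(sentence_a, sentence_b))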