@MLWhiz
Created February 9, 2019 08:01
from nltk.stem import WordNetLemmatizer
from nltk.tokenize.toktok import ToktokTokenizer

# Instantiate once at module level so repeated calls reuse the same objects.
wordnet_lemmatizer = WordNetLemmatizer()
tokenizer = ToktokTokenizer()

def lemma_text(text):
    """Tokenize text with Toktok and lemmatize each token via WordNet."""
    tokens = tokenizer.tokenize(text)
    tokens = [token.strip() for token in tokens]
    tokens = [wordnet_lemmatizer.lemmatize(token) for token in tokens]
    return ' '.join(tokens)