Created
March 5, 2010 16:51
Basic example of using NLTK for named entity extraction.
import nltk

with open('sample.txt', 'r') as f:
    sample = f.read()

sentences = nltk.sent_tokenize(sample)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
chunked_sentences = nltk.batch_ne_chunk(tagged_sentences, binary=True)

def extract_entity_names(t):
    entity_names = []

    if hasattr(t, 'node') and t.node:
        if t.node == 'NE':
            entity_names.append(' '.join([child[0] for child in t]))
        else:
            for child in t:
                entity_names.extend(extract_entity_names(child))

    return entity_names

entity_names = []
for tree in chunked_sentences:
    # Print results per sentence
    # print extract_entity_names(tree)

    entity_names.extend(extract_entity_names(tree))

# Print all entity names
# print entity_names

# Print unique entity names
print set(entity_names)
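The recursive extract_entity_names walks each chunk tree, joining the leaves of every NE subtree into a single name string. To illustrate the traversal without running the full NLTK pipeline, here is a sketch using a minimal stand-in for nltk.Tree (the MiniTree class and the sample sentence are hypothetical, for illustration only):

```python
# Minimal stand-in for nltk.Tree: a node label plus child elements.
# Leaves are (word, POS-tag) tuples, as produced by nltk.pos_tag.
class MiniTree(list):
    def __init__(self, node, children):
        super(MiniTree, self).__init__(children)
        self.node = node

def extract_entity_names(t):
    entity_names = []
    if hasattr(t, 'node') and t.node:
        if t.node == 'NE':
            # Join the words of this named-entity subtree into one name.
            entity_names.append(' '.join([child[0] for child in t]))
        else:
            # Not an entity: recurse into the children.
            for child in t:
                entity_names.extend(extract_entity_names(child))
    return entity_names

# Hypothetical chunked sentence: "Gudrun visited New York."
tree = MiniTree('S', [
    MiniTree('NE', [('Gudrun', 'NNP')]),
    ('visited', 'VBD'),
    MiniTree('NE', [('New', 'NNP'), ('York', 'NNP')]),
    ('.', '.'),
])

print(extract_entity_names(tree))  # ['Gudrun', 'New York']
```

Plain (word, tag) tuples have no node attribute, so the hasattr check is what stops the recursion at the leaves.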
Hi, and thanks for the code. I tried the version from ririw's comment on 3 Jul 2015 and got a syntax error on the last line, where it converts the list to unique names. When I deleted set it worked, which was actually better for me because I wanted to list the names by frequency. I tried it on an Icelandic saga, Laxdæla, and it worked fine. I added a dictionary to count unique names and a line to sort them by value. Here is the adapted code:
import nltk

with open('laxd.txt', 'r') as f:
    sample = f.read()

sentences = nltk.sent_tokenize(sample)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
chunked_sentences = nltk.ne_chunk_sents(tagged_sentences, binary=True)

def extract_entity_names(t):
    entity_names = []

    if hasattr(t, 'label') and t.label():
        if t.label() == 'NE':
            entity_names.append(' '.join([child[0] for child in t]))
        else:
            for child in t:
                entity_names.extend(extract_entity_names(child))

    return entity_names

entity_names = []
names = {}

for tree in chunked_sentences:
    # Print results per sentence
    # print(extract_entity_names(tree))

    entity_names.extend(extract_entity_names(tree))

# Count occurrences of each entity name
for w in entity_names:
    names[w] = names.get(w, 0) + 1

# Print all entity names sorted by frequency
print(sorted(names.items(), key=lambda x: x[1], reverse=True))
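The manual dictionary count above can equivalently be written with collections.Counter, whose most_common method gives the same frequency-sorted listing; a minimal sketch on placeholder entity names (the sample list here is invented for illustration):

```python
from collections import Counter

# Placeholder entity names standing in for the list built by
# extract_entity_names over the chunked sentences.
entity_names = ['Olaf', 'Gudrun', 'Olaf', 'Kjartan', 'Gudrun', 'Olaf']

# Counter tallies occurrences; most_common() returns (name, count)
# pairs sorted by descending count.
names = Counter(entity_names)

print(names.most_common())  # [('Olaf', 3), ('Gudrun', 2), ('Kjartan', 1)]
```

This produces the same (name, count) pairs as sorting names.items() by value in reverse.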