Created March 11, 2014 09:44
Python and Pig: simple word counter for Hadoop
mapper.py:
#!/usr/bin/env python
import sys
import string

# characters to strip from the input
exclude = set(string.punctuation)

# input comes from STDIN
for line in sys.stdin:
    line = line.strip()
    # drop punctuation and digits, then lowercase
    line = ''.join(ch for ch in line if ch not in exclude)
    line = ''.join(ch for ch in line if not ch.isdigit())
    line = line.lower()
    words = line.split()
    for word in words:
        # emit a (word, 1) pair for each word
        print '%s\t%s' % (word, 1)
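As a quick sanity check of the cleaning steps outside Hadoop, the same transformations can be run on a hard-coded line. A minimal sketch; the sample text is made up:

#!/usr/bin/env python
import string

exclude = set(string.punctuation)
line = "Hello, World! Version 2.0 -- hello again."  # made-up sample input
line = ''.join(ch for ch in line if ch not in exclude)  # drop punctuation
line = ''.join(ch for ch in line if not ch.isdigit())   # drop digits
print line.lower().split()
# prints: ['hello', 'world', 'version', 'hello', 'again']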
reducer.py:
#!/usr/bin/env python
import sys

current_word = None
current_count = 0
word = None

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()

    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)

    # convert count (currently a string) to int
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        continue

    # this IF-switch only works because Hadoop sorts map output
    # by key (here: word) before it is passed to the reducer
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # write result to STDOUT
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# do not forget to output the last word if needed!
if current_word == word:
    print '%s\t%s' % (current_word, current_count)
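The if-switch above works only because Hadoop sorts map output by key before the reduce phase. The same single-pass aggregation can be written with itertools.groupby, which makes that assumption explicit. This is just a sketch, not part of the gist; either reducer can be tested locally with cat input | python mapper.py | sort | python reducer.py:

#!/usr/bin/env python
# alternative reducer using itertools.groupby; like the version
# above, it assumes stdin is already sorted by word
import sys
from itertools import groupby
from operator import itemgetter

pairs = (line.strip().split('\t', 1) for line in sys.stdin)
for word, group in groupby(pairs, key=itemgetter(0)):
    # note: a malformed line raises ValueError here, whereas the
    # reducer above silently skips such lines
    print '%s\t%s' % (word, sum(int(count) for _, count in group))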
Pig script:
-- load each line of the input as a single chararray
a = LOAD '$INPUT' AS (foo:chararray);
-- split each line into a bag of tokens
b1 = FOREACH a GENERATE TOKENIZE(foo, ' ')
     AS tokens: {t:(word: chararray)};
-- lowercase each token and extract its alphabetic part
b2 = FOREACH b1 {
    cleaned = FOREACH tokens GENERATE
        FLATTEN(REGEX_EXTRACT_ALL(LOWER(word), '.*?([a-z]+).*?'))
        AS word;
    GENERATE FLATTEN(cleaned);
}
-- group identical words and count the occurrences of each
c = GROUP b2 BY word;
d = FOREACH c GENERATE COUNT(b2) AS counts, group AS word;
-- sort by count, descending
e = ORDER d BY counts DESC;
STORE e INTO '$OUTPUT';
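The $INPUT and $OUTPUT placeholders are Pig parameters, filled in at launch through parameter substitution, e.g. pig -param INPUT=/path/to/input -param OUTPUT=/path/to/output wordcount.pig (add -x local to run without a cluster). The paths and the script name wordcount.pig are illustrative.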
Output (with Pig):
906481 the
638893 and
594537 to
412530 you
388890 of
347708 a
301226 your
262320 in
251804 is
241811 for
187264 with
173763 on
149539 this
147917 can
145485 or
145249 it
144848 app
117014 are
104488 will
102854 that
101285 as
94020 by
92305 from
91746 be
79544 all
74109 have
69205 game
67582 free
65928 more
63014 not
...
Future actions
a. removing stop words (http://en.wikipedia.org/wiki/Stop_words)
b. how can I do a descending sort in a Python streaming program? (see the sketch below)
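One possible answer to (b) is to sort the reducer's output in a small post-processing step outside Hadoop. A minimal sketch, assuming the combined output fits in memory; inside Hadoop you would need a second job or a custom sort comparator instead:

#!/usr/bin/env python
# read 'word<TAB>count' lines from the reducer and print them as
# 'count<TAB>word', largest count first, matching the Pig output
import sys

pairs = []
for line in sys.stdin:
    word, count = line.strip().split('\t', 1)
    pairs.append((int(count), word))

for count, word in sorted(pairs, reverse=True):
    print '%s\t%s' % (count, word)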
Think about it.
Hadoop MapReduce sorts data in ascending order by key, and the difference between the two programs, written in Python and Pig, comes down to sort order. So far I have found it hard to get Python streaming to produce results in descending order (if you know how, please let me know), whereas Pig makes it easy to sort results in either ascending or descending order.
Amazon EMR makes it easier to run a MapReduce program, but it takes more than 3 minutes to boot its virtual instances. Running Hadoop locally requires installing Hadoop and Pig, testing the installation, and checking the configuration, but it is very responsive when your machine is fast and your dataset is not too big. Even so, running Pig on my local machine has been a problem for me.
Hadoop raises an error when you run a MapReduce program against an output path that already exists, so check your output path before launching a job.
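For that last point, one way to avoid the "output directory already exists" error is to delete the path before launching the job. A minimal sketch, assuming the hadoop CLI is on PATH; the output path is made up, and older Hadoop releases spell the command hadoop fs -rmr:

#!/usr/bin/env python
# remove the job's output path so Hadoop does not abort with
# "Output directory ... already exists"
import subprocess

output = '/user/me/wordcount-out'  # made-up path; use your job's output path
# returns non-zero (harmlessly) if the path does not exist yet
subprocess.call(['hadoop', 'fs', '-rm', '-r', output])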