Data Exploration Using Spark
| { | |
| "metadata": { | |
| "name": "", | |
| "signature": "sha256:fc375f802d645c72e845238930ecdacfad47a7de504360da2060ea64c96c07ac" | |
| }, | |
| "nbformat": 3, | |
| "nbformat_minor": 0, | |
| "worksheets": [ | |
| { | |
| "cells": [ | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "In this chapter, we will first use the Spark shell to interactively explore the Wikipedia data. Then, we will give a brief introduction to writing standalone Spark programs.\n", | |
| "\n", | |
| "## Prerequisite: getting the dataset\n", | |
| "Please follow the instructions on the [Getting Started](http://ampcamp.berkeley.edu/5/exercises/getting-started.html) page to download and unpack the `training-downloads.zip` file.\n", | |
| "\n", | |
| "## Interactive Analysis\n", | |
| "Let\u2019s now use Spark to do some order statistics on the data set. First, copy this notebook in the main directory and launch ipython at the terminal with:" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "```bash\n", | |
| "$ IPYTHON_OPTS=\"notebook --pylab inline\" ./bin/pyspark --master \"local[4]\"\n", | |
| "```" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "1-Warm up by creating an RDD (Resilient Distributed Dataset) named pagecounts from the input files. In the Spark shell, the SparkContext is already created for you as variable sc." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| "pagecounts = sc.textFile(\"pagecounts/\")\n", | |
| "pagecounts" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "metadata": {}, | |
| "output_type": "pyout", | |
| "prompt_number": 3, | |
| "text": [ | |
| "pagecounts/ MappedRDD[3] at textFile at NativeMethodAccessorImpl.java:-2" | |
| ] | |
| } | |
| ], | |
| "prompt_number": 3 | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "2-Let\u2019s take a peek at the data. You can use the take operation of an RDD to get the first K records. Here, K = 10." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| "pagecounts.take(10)" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "metadata": {}, | |
| "output_type": "pyout", | |
| "prompt_number": 4, | |
| "text": [ | |
| "[u'20090505-000000 aa Main_Page 2 9980',\n", | |
| " u'20090505-000000 ab %D0%90%D0%B8%D0%BD%D1%82%D0%B5%D1%80%D0%BD%D0%B5%D1%82 1 465',\n", | |
| " u'20090505-000000 ab %D0%98%D1%85%D0%B0%D0%B4%D0%BE%D1%83_%D0%B0%D0%B4%D0%B0%D2%9F%D1%8C%D0%B0 1 16086',\n", | |
| " u'20090505-000000 af.b Tuisblad 1 36236',\n", | |
| " u'20090505-000000 af.d Tuisblad 4 189738',\n", | |
| " u'20090505-000000 af.q Tuisblad 2 56143',\n", | |
| " u'20090505-000000 af Afrika 1 46833',\n", | |
| " u'20090505-000000 af Afrikaans 2 53577',\n", | |
| " u'20090505-000000 af Australi%C3%AB 1 132432',\n", | |
| " u'20090505-000000 af Barack_Obama 1 23368']" | |
| ] | |
| } | |
| ], | |
| "prompt_number": 4 | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Unfortunately this is not very readable because take() returns an array and Scala simply prints the array with each element separated by a comma. We can make it prettier by traversing the array to print each record on its own line." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| "for x in pagecounts.take(10):\n", | |
| " print x" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "output_type": "stream", | |
| "stream": "stdout", | |
| "text": [ | |
| "20090505-000000 aa Main_Page 2 9980\n", | |
| "20090505-000000 ab %D0%90%D0%B8%D0%BD%D1%82%D0%B5%D1%80%D0%BD%D0%B5%D1%82 1 465\n", | |
| "20090505-000000 ab %D0%98%D1%85%D0%B0%D0%B4%D0%BE%D1%83_%D0%B0%D0%B4%D0%B0%D2%9F%D1%8C%D0%B0 1 16086\n", | |
| "20090505-000000 af.b Tuisblad 1 36236\n", | |
| "20090505-000000 af.d Tuisblad 4 189738\n", | |
| "20090505-000000 af.q Tuisblad 2 56143\n", | |
| "20090505-000000 af Afrika 1 46833\n", | |
| "20090505-000000 af Afrikaans 2 53577\n", | |
| "20090505-000000 af Australi%C3%AB 1 132432\n", | |
| "20090505-000000 af Barack_Obama 1 23368\n" | |
| ] | |
| } | |
| ], | |
| "prompt_number": 5 | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "3-Let\u2019s see how many records in total are in this data set (this command will take a while, so read ahead while it is running)." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| "pagecounts.count()" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "metadata": {}, | |
| "output_type": "pyout", | |
| "prompt_number": 7, | |
| "text": [ | |
| "1398882" | |
| ] | |
| } | |
| ], | |
| "prompt_number": 7 | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "This should launch 2 tasks on the your Spark cluster. If you look closely at the terminal, the console log is pretty chatty and tells you the progress of the tasks.\n", | |
| "\n", | |
| "While it\u2019s running, you can open the Spark web console to see the progress. To do this, open your favorite browser, and type in the following URL.\n", | |
| "\n", | |
| "http://localhost:4040\n", | |
| "\n", | |
| "Note that this page is only available if you have an active job or Spark shell.\n", | |
| "You should see the Spark application status web interface, similar to the following:\n", | |
| "\n", | |
| "The links in this interface allow you to track the job\u2019s progress and various metrics about its execution, including task durations and cache statistics.\n", | |
| "\n", | |
| "When your query finishes running, it should return the following count:\n", | |
| " \n", | |
| " 1398882\n", | |
| " \n", | |
| "1-Recall from above when we described the format of the data set, that the second field is the \u201cproject code\u201d and contains information about the language of the pages. For example, the project code \u201cen\u201d indicates an English page. Let\u2019s derive an RDD containing only English pages from pagecounts. This can be done by applying a filter function to pagecounts. For each record, we can split it by the field delimiter (i.e. a space) and get the second field-\u2013 and then compare it with the string \u201cen\u201d.\n", | |
| "\n", | |
| "To avoid reading from disks each time we perform any operations on the RDD, we also cache the RDD into memory. This is where Spark really starts to to shine." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| "enPages = pagecounts.filter(lambda x: x.split(\" \")[1] == \"en\").cache()" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [], | |
| "prompt_number": 8 | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "When you type this command into the Spark shell, Spark defines the RDD, but because of lazy evaluation, no computation is done yet. Next time any action is invoked on enPages, Spark will cache the data set in memory across the workers in your cluster.\n", | |
| "\n", | |
| "2-How many records are there for English pages?" | |
| ] | |
| }, | |
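| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Count them with the count action (the output is omitted here, since the exact number depends on the files you downloaded):" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| "enPages.count()" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [] | |
| }, | |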
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "The first time this command is run, similar to the last count we did, it will take 2 - 3 minutes while Spark scans through the entire data set on disk. But since enPages was marked as \u201ccached\u201d in the previous step, if you run count on the same RDD again, it should return an order of magnitude faster.\n", | |
| "\n", | |
| "If you examine the console log closely, you will see lines like this, indicating some data was added to the cache:\n", | |
| "```\n", | |
| "13/02/05 20:29:01 INFO storage.BlockManagerMasterActor$BlockManagerInfo: Added rdd_2_172 in memory on ip-10-188-18-127.ec2.internal:42068 (size: 271.8 MB, free: 5.5 GB)\n", | |
| "Let\u2019s try something fancier. Generate a histogram of total page views on Wikipedia English pages for the date range represented in our dataset (May 5 to May 7, 2009). The high level idea of what we\u2019ll be doing is as follows. First, we generate a key value pair for each line; the key is the date (the first eight characters of the first field), and the value is the number of pageviews for that date (the fourth field).\n", | |
| "```" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| ">>> enTuples = enPages.map(lambda x: x.split(\" \"))\n", | |
| ">>> enKeyValuePairs = enTuples.map(lambda x: (x[0][:8], int(x[3])))" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [], | |
| "prompt_number": 9 | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Next, we shuffle the data and group all values of the same key together. Finally we sum up the values for each key. There is a convenient method called reduceByKey in Spark for exactly this pattern. Note that the second argument to reduceByKey determines the number of reducers to use. By default, Spark assumes that the reduce function is commutative and associative and applies combiners on the mapper side. Since we know there is a very limited number of keys in this case (because there are only 3 unique dates in our data set), let\u2019s use only one reducer." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| "enKeyValuePairs.reduceByKey(lambda x, y: x + y, 1).collect()" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "metadata": {}, | |
| "output_type": "pyout", | |
| "prompt_number": 10, | |
| "text": [ | |
| "[(u'20090507', 6175726), (u'20090505', 7076855)]" | |
| ] | |
| } | |
| ], | |
| "prompt_number": 10 | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "The collect method at the end converts the result from an RDD to an array.\n", | |
| "\n", | |
| "We can combine the previous three commands into one:" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| "enPages.map(lambda x: x.split(\" \"))\\\n", | |
| " .map(lambda x: (x[0][:8], int(x[3])))\\\n", | |
| " .reduceByKey(lambda x, y: x + y, 1)\\\n", | |
| " .collect()" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "metadata": {}, | |
| "output_type": "pyout", | |
| "prompt_number": 14, | |
| "text": [ | |
| "[(u'20090507', 6175726), (u'20090505', 7076855)]" | |
| ] | |
| } | |
| ], | |
| "prompt_number": 14 | |
| }, | |
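| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "As an optional aside, we can also plot the per-date totals as a bar chart (a minimal sketch, assuming matplotlib is available, which the --pylab inline option we launched the notebook with takes care of):" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| "import matplotlib.pyplot as plt\n", | |
| "\n", | |
| "# Per-date totals computed above: a short list of (date, views) pairs.\n", | |
| "dailyTotals = sorted(enPages.map(lambda x: x.split(\" \"))\n", | |
| "                            .map(lambda x: (x[0][:8], int(x[3])))\n", | |
| "                            .reduceByKey(lambda x, y: x + y, 1)\n", | |
| "                            .collect())\n", | |
| "\n", | |
| "dates = [d for d, v in dailyTotals]\n", | |
| "views = [v for d, v in dailyTotals]\n", | |
| "\n", | |
| "plt.bar(range(len(dates)), views, align=\"center\")  # one bar per date\n", | |
| "plt.xticks(range(len(dates)), dates)\n", | |
| "plt.ylabel(\"English page views\")" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [] | |
| }, | |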
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Suppose we want to find pages that were viewed more than 200,000 times during the three days covered by our dataset. Conceptually, this task is similar to the previous query. But, given the large number of pages (23 million distinct page names), the new task is very expensive. We are doing an expensive group-by with a lot of network shuffling of data.\n", | |
| "\n", | |
| "To recap, first we split each line of data into its respective fields. Next, we extract the fields for page name and number of page views. We reduce by key again, this time with 40 reducers. Then we filter out pages with less than 200,000 total views over our time window represented by our dataset." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "collapsed": false, | |
| "input": [ | |
| "enPages.map(lambda x: x.split(\" \"))\\\n", | |
| " .map(lambda x: (x[2], int(x[3])))\\\n", | |
| " .reduceByKey(lambda x, y: x + y, 40)\\\n", | |
| " .filter(lambda x: x[1] > 200000).map(lambda x: (x[1], x[0])).collect()" | |
| ], | |
| "language": "python", | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "metadata": {}, | |
| "output_type": "pyout", | |
| "prompt_number": 15, | |
| "text": [ | |
| "[(451126, u'Main_Page'), (1066734, u'404_error/'), (468159, u'Special:Search')]" | |
| ] | |
| } | |
| ], | |
| "prompt_number": 15 | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "There is no hard and fast way to calculate the optimal number of reducers for a given problem; you will build up intuition over time by experimenting with different values.\n", | |
| "\n", | |
| "To leave the Spark shell, type exit at the prompt.\n", | |
| "\n", | |
| "You can explore the full RDD API by browsing the Java/Scala or Python API docs.\n", | |
| "\n", | |
| "## Running Standalone Spark Programs\n", | |
| "Because of time constraints, in this tutorial we focus on ad-hoc style analytics using the Spark shell. However, for many tasks, it makes more sense to write a standalone Spark program. We will return to this in the section on Spark Streaming below, where you will actually write a standalone Spark Streaming job. We aren\u2019t going to cover how to structure, build, and run standalone Spark jobs here, but before we move on, we list here a few resources about standalone Spark jobs for you to come back and explore later.\n", | |
| "\n", | |
| "First, on the AMI for this tutorial we have included \u201ctemplate\u201d projects for Scala and Java standalone programs for both Spark and Spark streaming. The Spark ones can be found in the /root/scala-app-template and /root/java-app-template directories (we will discuss the Streaming ones later). Feel free to browse through the contents of those directories. You can also find examples of building and running Spark standalone jobs in Java and in Scala as part of the Spark Quick Start Guide. For even more details, see Matei Zaharia\u2019s slides and talk video about Standalone Spark jobs at the first AMP Camp." | |
| ] | |
| }, | |
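| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "As a rough Python counterpart to those Scala and Java templates, the interactive analysis above could be packaged as a standalone script and run with ./bin/spark-submit. The sketch below is only illustrative; the file name english_pagecounts.py and the application name are made up, and it assumes the pagecounts/ directory sits next to the script.\n", | |
| "\n", | |
| "```python\n", | |
| "# english_pagecounts.py -- a minimal standalone sketch (hypothetical file name),\n", | |
| "# run with: ./bin/spark-submit english_pagecounts.py\n", | |
| "from pyspark import SparkConf, SparkContext\n", | |
| "\n", | |
| "if __name__ == \"__main__\":\n", | |
| "    # In a standalone program we create the SparkContext ourselves.\n", | |
| "    conf = SparkConf().setAppName(\"EnglishPagecounts\")\n", | |
| "    sc = SparkContext(conf=conf)\n", | |
| "\n", | |
| "    pagecounts = sc.textFile(\"pagecounts/\")\n", | |
| "    enPages = pagecounts.filter(lambda x: x.split(\" \")[1] == \"en\")\n", | |
| "\n", | |
| "    # Total English page views per date, exactly as in the interactive session.\n", | |
| "    totals = enPages.map(lambda x: x.split(\" \")) \\\n", | |
| "                    .map(lambda x: (x[0][:8], int(x[3]))) \\\n", | |
| "                    .reduceByKey(lambda x, y: x + y, 1) \\\n", | |
| "                    .collect()\n", | |
| "\n", | |
| "    for date, views in totals:\n", | |
| "        print date, views\n", | |
| "\n", | |
| "    sc.stop()\n", | |
| "```" | |
| ] | |
| } | |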
| ], | |
| "metadata": {} | |
| } | |
| ] | |
| } |