{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Classifying wine dataset using pipelines\n",
"\n",
"### Notebook by [Aashish K Tiwari](https://gist.github.com/AashishTiwari)\n",
"#### You can see all my public gists @ https://gist.github.com/AashishTiwari\n",
"\n",
"#### [Persistent Systems Ltd]\n",
"#### Data Source: UCI ML Repository"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Table of contents\n",
"\n",
"\n",
"1. [Step 1: Analyzing Data](#Step-1:-Analyzing-data)\n",
"\n",
"2. [Step 2: Applying Classification Techniques](#Step-2:-Applying-Classification-Techniques)\n",
"\n",
"3. [Step 3: Standardization](#Step-3:-Standardization)\n",
"\n",
"4. [Step 4: Using Pipelines](#Step-4:-Using-Pipelines)\n",
"\n",
"5. [Step 5: Conclusion](#Step-5:-Conclusion)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## libraries\n",
"\n",
"[[ go back to the top ]](#Table-of-contents)\n",
"\n",
"\n",
"* **NumPy**: >= V 1.11.1\n",
"* **pandas**: >= V 0.18.1\n",
"* **scikit-learn**: >= V 0.17.1"
]
},
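{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick way to confirm the installed versions against the list above (a minimal sketch; it assumes the three packages are importable in the current environment):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy\n",
"import pandas\n",
"import sklearn\n",
"\n",
"# Print the installed versions so they can be compared with the requirements listed above\n",
"print('NumPy: ' + numpy.__version__)\n",
"print('pandas: ' + pandas.__version__)\n",
"print('scikit-learn: ' + sklearn.__version__)"
]
},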
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1: Analyzing Data\n",
"\n",
"[[ go back to the top ]](#Table-of-contents)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"About Wine Dataset:\n",
"These data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines."
]
},
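{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the 13 constituents, as listed in the UCI documentation, are collected below in a helper list (`wine_features`, introduced here only as a lookup aid; the notebook itself keeps the numeric column indices produced by `read_csv`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Attribute names from the UCI wine dataset documentation.\n",
"# Column 0 of the data file is the class label (cultivar 1, 2 or 3); columns 1-13 are:\n",
"wine_features = ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium',\n",
"                 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',\n",
"                 'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']"
]
},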
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>0</th>\n",
" <th>1</th>\n",
" <th>2</th>\n",
" <th>3</th>\n",
" <th>4</th>\n",
" <th>5</th>\n",
" <th>6</th>\n",
" <th>7</th>\n",
" <th>8</th>\n",
" <th>9</th>\n",
" <th>10</th>\n",
" <th>11</th>\n",
" <th>12</th>\n",
" <th>13</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>1</td>\n",
" <td>14.23</td>\n",
" <td>1.71</td>\n",
" <td>2.43</td>\n",
" <td>15.6</td>\n",
" <td>127</td>\n",
" <td>2.80</td>\n",
" <td>3.06</td>\n",
" <td>0.28</td>\n",
" <td>2.29</td>\n",
" <td>5.64</td>\n",
" <td>1.04</td>\n",
" <td>3.92</td>\n",
" <td>1065</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>1</td>\n",
" <td>13.20</td>\n",
" <td>1.78</td>\n",
" <td>2.14</td>\n",
" <td>11.2</td>\n",
" <td>100</td>\n",
" <td>2.65</td>\n",
" <td>2.76</td>\n",
" <td>0.26</td>\n",
" <td>1.28</td>\n",
" <td>4.38</td>\n",
" <td>1.05</td>\n",
" <td>3.40</td>\n",
" <td>1050</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>1</td>\n",
" <td>13.16</td>\n",
" <td>2.36</td>\n",
" <td>2.67</td>\n",
" <td>18.6</td>\n",
" <td>101</td>\n",
" <td>2.80</td>\n",
" <td>3.24</td>\n",
" <td>0.30</td>\n",
" <td>2.81</td>\n",
" <td>5.68</td>\n",
" <td>1.03</td>\n",
" <td>3.17</td>\n",
" <td>1185</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>1</td>\n",
" <td>14.37</td>\n",
" <td>1.95</td>\n",
" <td>2.50</td>\n",
" <td>16.8</td>\n",
" <td>113</td>\n",
" <td>3.85</td>\n",
" <td>3.49</td>\n",
" <td>0.24</td>\n",
" <td>2.18</td>\n",
" <td>7.80</td>\n",
" <td>0.86</td>\n",
" <td>3.45</td>\n",
" <td>1480</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>1</td>\n",
" <td>13.24</td>\n",
" <td>2.59</td>\n",
" <td>2.87</td>\n",
" <td>21.0</td>\n",
" <td>118</td>\n",
" <td>2.80</td>\n",
" <td>2.69</td>\n",
" <td>0.39</td>\n",
" <td>1.82</td>\n",
" <td>4.32</td>\n",
" <td>1.04</td>\n",
" <td>2.93</td>\n",
" <td>735</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" 0 1 2 3 4 5 6 7 8 9 10 11 12 \\\n",
"0 1 14.23 1.71 2.43 15.6 127 2.80 3.06 0.28 2.29 5.64 1.04 3.92 \n",
"1 1 13.20 1.78 2.14 11.2 100 2.65 2.76 0.26 1.28 4.38 1.05 3.40 \n",
"2 1 13.16 2.36 2.67 18.6 101 2.80 3.24 0.30 2.81 5.68 1.03 3.17 \n",
"3 1 14.37 1.95 2.50 16.8 113 3.85 3.49 0.24 2.18 7.80 0.86 3.45 \n",
"4 1 13.24 2.59 2.87 21.0 118 2.80 2.69 0.39 1.82 4.32 1.04 2.93 \n",
"\n",
" 13 \n",
"0 1065 \n",
"1 1050 \n",
"2 1185 \n",
"3 1480 \n",
"4 735 "
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"df = pd.read_csv('./wine.data', header=None)\n",
"df.head()"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"RangeIndex: 178 entries, 0 to 177\n",
"Data columns (total 14 columns):\n",
"0 178 non-null int64\n",
"1 178 non-null float64\n",
"2 178 non-null float64\n",
"3 178 non-null float64\n",
"4 178 non-null float64\n",
"5 178 non-null int64\n",
"6 178 non-null float64\n",
"7 178 non-null float64\n",
"8 178 non-null float64\n",
"9 178 non-null float64\n",
"10 178 non-null float64\n",
"11 178 non-null float64\n",
"12 178 non-null float64\n",
"13 178 non-null int64\n",
"dtypes: float64(11), int64(3)\n",
"memory usage: 19.5 KB\n"
]
}
],
"source": [
"df.info()"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>0</th>\n",
" <th>1</th>\n",
" <th>2</th>\n",
" <th>3</th>\n",
" <th>4</th>\n",
" <th>5</th>\n",
" <th>6</th>\n",
" <th>7</th>\n",
" <th>8</th>\n",
" <th>9</th>\n",
" <th>10</th>\n",
" <th>11</th>\n",
" <th>12</th>\n",
" <th>13</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>count</th>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" <td>178.000000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>mean</th>\n",
" <td>1.938202</td>\n",
" <td>13.000618</td>\n",
" <td>2.336348</td>\n",
" <td>2.366517</td>\n",
" <td>19.494944</td>\n",
" <td>99.741573</td>\n",
" <td>2.295112</td>\n",
" <td>2.029270</td>\n",
" <td>0.361854</td>\n",
" <td>1.590899</td>\n",
" <td>5.058090</td>\n",
" <td>0.957449</td>\n",
" <td>2.611685</td>\n",
" <td>746.893258</td>\n",
" </tr>\n",
" <tr>\n",
" <th>std</th>\n",
" <td>0.775035</td>\n",
" <td>0.811827</td>\n",
" <td>1.117146</td>\n",
" <td>0.274344</td>\n",
" <td>3.339564</td>\n",
" <td>14.282484</td>\n",
" <td>0.625851</td>\n",
" <td>0.998859</td>\n",
" <td>0.124453</td>\n",
" <td>0.572359</td>\n",
" <td>2.318286</td>\n",
" <td>0.228572</td>\n",
" <td>0.709990</td>\n",
" <td>314.907474</td>\n",
" </tr>\n",
" <tr>\n",
" <th>min</th>\n",
" <td>1.000000</td>\n",
" <td>11.030000</td>\n",
" <td>0.740000</td>\n",
" <td>1.360000</td>\n",
" <td>10.600000</td>\n",
" <td>70.000000</td>\n",
" <td>0.980000</td>\n",
" <td>0.340000</td>\n",
" <td>0.130000</td>\n",
" <td>0.410000</td>\n",
" <td>1.280000</td>\n",
" <td>0.480000</td>\n",
" <td>1.270000</td>\n",
" <td>278.000000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>25%</th>\n",
" <td>1.000000</td>\n",
" <td>12.362500</td>\n",
" <td>1.602500</td>\n",
" <td>2.210000</td>\n",
" <td>17.200000</td>\n",
" <td>88.000000</td>\n",
" <td>1.742500</td>\n",
" <td>1.205000</td>\n",
" <td>0.270000</td>\n",
" <td>1.250000</td>\n",
" <td>3.220000</td>\n",
" <td>0.782500</td>\n",
" <td>1.937500</td>\n",
" <td>500.500000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>50%</th>\n",
" <td>2.000000</td>\n",
" <td>13.050000</td>\n",
" <td>1.865000</td>\n",
" <td>2.360000</td>\n",
" <td>19.500000</td>\n",
" <td>98.000000</td>\n",
" <td>2.355000</td>\n",
" <td>2.135000</td>\n",
" <td>0.340000</td>\n",
" <td>1.555000</td>\n",
" <td>4.690000</td>\n",
" <td>0.965000</td>\n",
" <td>2.780000</td>\n",
" <td>673.500000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>75%</th>\n",
" <td>3.000000</td>\n",
" <td>13.677500</td>\n",
" <td>3.082500</td>\n",
" <td>2.557500</td>\n",
" <td>21.500000</td>\n",
" <td>107.000000</td>\n",
" <td>2.800000</td>\n",
" <td>2.875000</td>\n",
" <td>0.437500</td>\n",
" <td>1.950000</td>\n",
" <td>6.200000</td>\n",
" <td>1.120000</td>\n",
" <td>3.170000</td>\n",
" <td>985.000000</td>\n",
" </tr>\n",
" <tr>\n",
" <th>max</th>\n",
" <td>3.000000</td>\n",
" <td>14.830000</td>\n",
" <td>5.800000</td>\n",
" <td>3.230000</td>\n",
" <td>30.000000</td>\n",
" <td>162.000000</td>\n",
" <td>3.880000</td>\n",
" <td>5.080000</td>\n",
" <td>0.660000</td>\n",
" <td>3.580000</td>\n",
" <td>13.000000</td>\n",
" <td>1.710000</td>\n",
" <td>4.000000</td>\n",
" <td>1680.000000</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" 0 1 2 3 4 5 \\\n",
"count 178.000000 178.000000 178.000000 178.000000 178.000000 178.000000 \n",
"mean 1.938202 13.000618 2.336348 2.366517 19.494944 99.741573 \n",
"std 0.775035 0.811827 1.117146 0.274344 3.339564 14.282484 \n",
"min 1.000000 11.030000 0.740000 1.360000 10.600000 70.000000 \n",
"25% 1.000000 12.362500 1.602500 2.210000 17.200000 88.000000 \n",
"50% 2.000000 13.050000 1.865000 2.360000 19.500000 98.000000 \n",
"75% 3.000000 13.677500 3.082500 2.557500 21.500000 107.000000 \n",
"max 3.000000 14.830000 5.800000 3.230000 30.000000 162.000000 \n",
"\n",
" 6 7 8 9 10 11 \\\n",
"count 178.000000 178.000000 178.000000 178.000000 178.000000 178.000000 \n",
"mean 2.295112 2.029270 0.361854 1.590899 5.058090 0.957449 \n",
"std 0.625851 0.998859 0.124453 0.572359 2.318286 0.228572 \n",
"min 0.980000 0.340000 0.130000 0.410000 1.280000 0.480000 \n",
"25% 1.742500 1.205000 0.270000 1.250000 3.220000 0.782500 \n",
"50% 2.355000 2.135000 0.340000 1.555000 4.690000 0.965000 \n",
"75% 2.800000 2.875000 0.437500 1.950000 6.200000 1.120000 \n",
"max 3.880000 5.080000 0.660000 3.580000 13.000000 1.710000 \n",
"\n",
" 12 13 \n",
"count 178.000000 178.000000 \n",
"mean 2.611685 746.893258 \n",
"std 0.709990 314.907474 \n",
"min 1.270000 278.000000 \n",
"25% 1.937500 500.500000 \n",
"50% 2.780000 673.500000 \n",
"75% 3.170000 985.000000 \n",
"max 4.000000 1680.000000 "
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From above we can see that data is not standardized, Eg: Means for some features are in range of 746 OR 99 and some are in range of 1.5"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2: Applying Classification Techniques\n",
"\n",
"[[ go back to the top ]](#Table-of-contents)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"##### First lets blindly apply our classification techniques"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from sklearn.cross_validation import train_test_split\n",
"\n",
"X = df.values[:,1:]\n",
"y = df.values[:,0]\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=12345)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Predicting using Naive Bayes"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"GaussianNB()"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.naive_bayes import GaussianNB\n",
"model = GaussianNB()\n",
"model.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"0.98148148148148151"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
" model.score(X_test, y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Predict using KNN"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"0.62962962962962965"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.neighbors import KNeighborsClassifier\n",
"knn_classifier = KNeighborsClassifier()\n",
"knn_classifier.fit(X_train,y_train)\n",
"knn_classifier.score(X_test, y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we can see by blindly applying distance based algorithms such as KNN how drastically accuracy has dropped.\n",
"Same dataset on Naive Bayes gives 98.14% accuracy but with KNN it gives 63% accuracy."
]
},
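{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to see why KNN suffers here: the last feature (proline) spans hundreds of units while most of the others span only a few, so the Euclidean distances that KNN relies on are dominated by that single feature. A minimal check of this, reusing the `X_train` array defined above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Per-feature ranges: the last column (proline) dwarfs the rest, so unscaled\n",
"# Euclidean distances are driven almost entirely by it.\n",
"print(X_train.max(axis=0) - X_train.min(axis=0))\n",
"\n",
"# Distance between the first two training samples, with and without that column\n",
"a, b = X_train[0], X_train[1]\n",
"print(np.sqrt(((a - b) ** 2).sum()))\n",
"print(np.sqrt(((a[:-1] - b[:-1]) ** 2).sum()))"
]
},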
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 3: Standardization\n",
"\n",
"[[ go back to the top ]](#Table-of-contents)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### We use standardization\n",
"Please also refer this excellent article :\n",
"http://sebastianraschka.com/Articles/2014_about_feature_scaling.html"
]
},
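{
"cell_type": "markdown",
"metadata": {},
"source": [
"Standardization rescales each feature to zero mean and unit variance; this is what `StandardScaler` below computes from the training data:\n",
"\n",
"$$z = \\frac{x - \\mu}{\\sigma}$$\n",
"\n",
"where $\\mu$ and $\\sigma$ are the mean and standard deviation of that feature on the training set."
]
},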
{
"cell_type": "code",
"execution_count": 35,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from sklearn import preprocessing\n",
"\n",
"std_scale = preprocessing.StandardScaler().fit(X_train)\n",
"X_train_std = std_scale.transform(X_train)\n",
"X_test_std = std_scale.transform(X_test)"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"1.0"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.neighbors import KNeighborsClassifier\n",
"knn_classifier = KNeighborsClassifier()\n",
"knn_classifier.fit(X_train_std,y_train)\n",
"knn_classifier.score(X_test_std, y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By standardazing the features we have improved the accuracy of KNN algorithm lets now work by reducing dimensions to 2"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from sklearn.decomposition import PCA\n",
"\n",
"pca_original = PCA(n_components=2).fit(X_train)\n",
"X_train = pca_original.transform(X_train)\n",
"X_test = pca_original.transform(X_test)\n",
"\n",
"pca_standard = PCA(n_components=2).fit(X_train_std)\n",
"X_train_std = pca_standard.transform(X_train_std)\n",
"X_test_std = pca_standard.transform(X_test_std)"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"0.59259259259259256"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.neighbors import KNeighborsClassifier\n",
"knn_classifier = KNeighborsClassifier()\n",
"knn_classifier.fit(X_train,y_train)\n",
"knn_classifier.score(X_test, y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With number of components brought down to 2 on non-standardized data we see the drop in accuracy levels"
]
},
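{
"cell_type": "markdown",
"metadata": {},
"source": [
"The explained variance of the two fitted PCA objects hints at why: on the unscaled data the leading component is essentially the high-variance proline feature, so the two retained components carry little information from the other twelve features. A quick check, reusing `pca_original` and `pca_standard` fitted above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Fraction of the total variance captured by each of the two retained components\n",
"print(pca_original.explained_variance_ratio_)\n",
"print(pca_standard.explained_variance_ratio_)"
]
},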
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"0.98148148148148151"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.neighbors import KNeighborsClassifier\n",
"knn_classifier = KNeighborsClassifier()\n",
"knn_classifier.fit(X_train_std,y_train)\n",
"knn_classifier.score(X_test_std, y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"But on Standardizing again we are getting good accuracy levels"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 4: Using Pipelines\n",
"\n",
"[[ go back to the top ]](#Table-of-contents)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Applying Pipeline to club all the steps in 1 command:"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.decomposition import PCA\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"from sklearn.pipeline import Pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Apply the pipeline with PCA."
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"Pipeline(steps=[('scl', StandardScaler(copy=True, with_mean=True, with_std=True)), ('pca', PCA(copy=True, n_components=2, whiten=False)), ('clf', KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',\n",
" metric_params=None, n_jobs=1, n_neighbors=5, p=2,\n",
" weights='uniform'))])"
]
},
"execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.cross_validation import train_test_split\n",
"\n",
"X = df.values[:,1:]\n",
"y = df.values[:,0]\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=12345)\n",
"\n",
"pipe_knn = Pipeline([('scl', StandardScaler()),\n",
" ('pca', PCA(n_components=2)),\n",
" ('clf', KNeighborsClassifier())])\n",
"pipe_knn.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now test the scores"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Test Accuracy: 0.981\n"
]
}
],
"source": [
"print('Test Accuracy: %.3f' % pipe_knn.score(X_test, y_test))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Finding Best Params using Grid Search"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"#TODO"
]
},
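{
"cell_type": "markdown",
"metadata": {},
"source": [
"The grid search is left as a TODO above; below is a minimal sketch of one way it could look. It assumes the same scikit-learn version as the rest of the notebook (there `GridSearchCV` is imported from `sklearn.grid_search`; in newer releases it lives in `sklearn.model_selection`), and the `n_neighbors` values are arbitrary examples. Pipeline parameters are addressed as `<step name>__<parameter name>`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from sklearn.grid_search import GridSearchCV\n",
"\n",
"# Tune the number of neighbours of the 'clf' step of pipe_knn via 5-fold cross-validation\n",
"param_grid = {'clf__n_neighbors': [3, 5, 7, 9, 11]}\n",
"\n",
"grid = GridSearchCV(pipe_knn, param_grid=param_grid, cv=5)\n",
"grid.fit(X_train, y_train)\n",
"\n",
"print(grid.best_params_)\n",
"print('Test Accuracy: %.3f' % grid.score(X_test, y_test))"
]
},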
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 5: Conclusion\n",
"\n",
"[[ go back to the top ]](#Table-of-contents)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Non Standardized data can affect the algorithm accuracy for some distance based algorithms such as KNN.Using standardization and dimensionality reduction we can improve accuracy levels.\n",
"Also the concept of pipelines in SciKit package helps us do the manual steps using just one command!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
}