{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Hierarchical Bayesian Movie Ratings\n",
"\n",
"An attempt to port the work of [@BradLee77](https://courses.edx.org/u/BradLee77) shown in [this notebook](https://gist.github.com/anonymous/a39e32133ec493840c3e3fc86e2b0a3d) [1] from PyMC to bayespy.\n",
"\n",
"> This is a Bayesian hierarchical (pooling) model for movie ratings. Each movie can be rated on a scale from zero to five. Each movie is rated several times. We want to find a smoothed distribution of ratings for each movie.\n",
"> \n",
"> We are going to learn a top-level prior distribution (hyperprior) on movie ratings from the data. Each movie will then have its own prior that is smoothed by this top-level prior. Another way of thinking about this is that the prior for ratings for each movie will be shrunk towards the group-level, or pooled, distribution.\n",
"> \n",
"> If a movie has an atypical rating distribution, this approach will shrink the ratings to something more in-line with what is expected. Furthermore, this learned prior can be useful to bootstrap movies with few ratings to allow them to be meaningfully compared to movies with many ratings.\n",
"> \n",
"> The model is as follows:\n",
"> \n",
"> $\\gamma_{k=1...K} \\sim Gamma(\\alpha, \\beta)$\n",
"> \n",
"> $\\theta_{m=1...M} \\sim Dirichlet_K(c\\gamma_1, ..., c\\gamma_K)$\n",
"> \n",
"> $z_{m=1...M,n=1...N_m} \\sim Categorical_K(\\theta_m)$\n",
"> \n",
"> where:\n",
"> \n",
"> * $K$ number of movie rating levels (e.g. $K = 6$ implies ratings 0, ..., 5)\n",
"> * $M$ number of movies being rated\n",
"> * $N_m$ number of ratings for movie $m$ \n",
"> * $\\alpha = 1 / K$ in order to make the collection of gamma r.v.s [act as an exponential coefficient](http://tdunning.blogspot.com/2010/04/sampling-dirichlet-distribution-revised.html)\n",
"> * $\\beta$ rate parameter for the exponential top-level prior\n",
"> * $c$ concentration parameter dictating the strength of the top-level prior\n",
"> * $\\gamma_k$ top-level prior for rating level $k$\n",
"> * $\\theta_m$ movie-level prior for rating levels (multivariate with dimension = $K$)\n",
"> * $z_{mn}$ rating $n$ for movie $m$"
]
},
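{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, the generative process above can be simulated directly with numpy. This is only a sketch: the values of $\\beta$, $c$, and the per-movie rating counts below are illustrative, not taken from the data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"K, M = 6, 7                          # rating levels, number of movies\n",
"alpha, beta, c = 1.0 / K, 1.0, 10.0  # illustrative hyperparameter values\n",
"\n",
"# gamma_k ~ Gamma(alpha, beta); numpy takes shape and scale = 1 / rate\n",
"gamma = np.random.gamma(alpha, 1.0 / beta, size=K)\n",
"\n",
"# theta_m ~ Dirichlet_K(c * gamma): one K-vector of rating probabilities per movie\n",
"theta_sim = np.random.dirichlet(c * gamma, size=M)\n",
"\n",
"# z_mn ~ Categorical(theta_m): N_m individual ratings for movie m\n",
"N = [2, 5, 10, 30, 12, 10, 12]       # per-movie rating counts (illustrative)\n",
"ratings = [np.random.choice(K, size=n, p=theta_sim[m]) for m, n in enumerate(N)]"
]
},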
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/main/anaconda2/envs/python3/lib/python3.5/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.\n",
" warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')\n"
]
}
],
"source": [
"import numpy as np\n",
"import bayespy as bp\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# this array stores the counts of each rating for each\n",
"# movie where each row corresponds to a movie and each\n",
"# column corresponds to counts for a rating at that level\n",
"data = np.array([\n",
" [ 0, 0, 0, 0, 0, 2 ], # Movie 1 with 2 5-star ratings\n",
" [ 0, 0, 0, 0, 1, 4 ], # Movie 2 with 1 4-star rating and 4 5-star ratings\n",
" [ 0, 1, 1, 2, 4, 2 ], # ...\n",
" [ 0, 3, 3, 6, 12, 6 ],\n",
" [ 0, 0, 2, 4, 5, 1 ],\n",
" [ 0, 1, 3, 3, 2, 1 ],\n",
" [ 1, 0, 4, 4, 1, 2 ],\n",
"])\n",
"\n",
"pymc_inferred = np.array([\n",
" [0.02144172, 0.05540504, 0.12788657, 0.16527906, 0.18168197, 0.44830564],\n",
" [ 0.01063829, 0.04511137, 0.09015204, 0.11610496, 0.22394176, 0.51405158],\n",
" [ 0.00738755, 0.09079506, 0.12985281, 0.20999552, 0.3488494, 0.21311968],\n",
" [ 0.00366227, 0.09739613, 0.10971591, 0.2088047, 0.36784195, 0.21257903],\n",
" # the following were not output in [1], and I could not get pyMC3 running to reproduce them\n",
" [0.0001]*6,\n",
" [0.0001]*6,\n",
" [0.0001]*6\n",
" ])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model with Fixed Prior Concentration $\\alpha$"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"(numberOfMovies, numberOfRatingLevels) = data.shape\n",
"\n",
"theta = bp.nodes.Dirichlet(np.ones(numberOfRatingLevels),\n",
" plates=(numberOfMovies,),\n",
"              name=r'\\theta')"
"z = bp.nodes.Multinomial(np.sum(data, axis=1),\n",
" theta,\n",
" name='z')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Perform Inference\n",
"(Variational Message Passing)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Iteration 1: loglike=-5.473987e+01 (0.001 seconds)\n",
"Iteration 2: loglike=-5.473987e+01 (0.001 seconds)\n",
"Converged at iteration 2.\n"
]
}
],
"source": [
"z.observe(data)\n",
"theta.initialize_from_random()\n",
"\n",
"Q = bp.inference.VB(z,theta)\n",
"Q.update(repeat=1000)"
]
},
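{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the prior above is a fixed, symmetric $Dirichlet(1, ..., 1)$ and the likelihood is multinomial, the posterior for each $\\theta_m$ is conjugate and available in closed form: the posterior mean simply adds one pseudo-count to every rating level. This gives a quick check on the VB result."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# closed-form posterior mean under the Dirichlet(1,...,1) prior:\n",
"# add one pseudo-count per rating level, then normalize each row\n",
"closed_form = (data + 1) / (data.sum(axis=1, keepdims=True) + numberOfRatingLevels)\n",
"print(closed_form[0])  # movie 1: [0.125 0.125 0.125 0.125 0.125 0.375]"
]
},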
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def hinton(matrix, max_weight=None, ax=None):\n",
" \"\"\"Draw Hinton diagram for visualizing a weight matrix.\"\"\"\n",
" ax = ax if ax is not None else plt.gca()\n",
"\n",
" if not max_weight:\n",
" max_weight = 1\n",
"\n",
" ax.patch.set_facecolor('gray')\n",
" ax.set_aspect('equal', 'box')\n",
" ax.xaxis.set_major_locator(plt.NullLocator())\n",
" ax.yaxis.set_major_locator(plt.NullLocator())\n",
" \n",
" for (x, y), w in np.ndenumerate(matrix):\n",
" color = 'white' if w > 0 else 'black'\n",
" size = np.sqrt(np.abs(w) / max_weight)\n",
" rect = plt.Rectangle([x - size / 2, y - size / 2], size, size,\n",
" facecolor=color, edgecolor=color)\n",
" ax.add_patch(rect)\n",
"\n",
" ax.autoscale_view()\n",
" ax.invert_yaxis()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Hinton Diagrams of $\\theta$"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.figure.Figure at 0x7f055e319ba8>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# bp.plot.hinton(theta)\n",
"# bp.plot.pyplot.title(u'Learned distribution of theta')\n",
"# bp.plot.pyplot.show()\n",
"\n",
"fig = plt.figure(figsize=(12,5))\n",
"\n",
"normed_data = data / np.sum(data, axis=1).reshape(numberOfMovies,1)\n",
"plt.subplot(131)\n",
"hinton(normed_data.T)\n",
"plt.title('$\\\\theta_{ML}$')\n",
"plt.ylabel('movie')\n",
"plt.xlabel('rating (0, ..., K-1)')\n",
"\n",
"# p = np.exp(theta._message_to_child()[0])\n",
"theta_pars = theta.get_parameters()[0]\n",
"p = theta_pars / np.sum(theta_pars, axis=1).reshape(numberOfMovies,1)\n",
"plt.subplot(132)\n",
"hinton(p.T)\n",
"plt.title('bayespy')\n",
"\n",
"plt.subplot(133)\n",
"hinton(pymc_inferred.T)\n",
"plt.title('pyMC3')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Some Statistics and Comparisons"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"normalizedRatingCountMeans = data.sum(axis=0)/np.sum(data)\n",
"p_mean = np.mean(p, axis=0)\n",
"theta_mean = p_mean / p_mean.sum()\n",
"\n",
"levels = np.diag(np.arange(numberOfRatingLevels))\n",
"maximumLikelihoodRatingAverages = normed_data.dot(levels).sum(axis=1)\n",
"inferredRatingAverages = p.dot(levels).sum(axis=1)"
]
},
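{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `levels = np.diag(...)` construction above is just a weighted sum over rating levels; multiplying by `np.arange(K)` directly gives the same expected rating and is a bit more direct."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# expected rating E[z_m] = sum_k k * p_mk, written two equivalent ways\n",
"via_diag = p.dot(np.diag(np.arange(numberOfRatingLevels))).sum(axis=1)\n",
"direct = p.dot(np.arange(numberOfRatingLevels))\n",
"assert np.allclose(via_diag, direct)\n",
"print(direct[0])  # movie 1: 3.125"
]
},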
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Comparing the top-level prior parameter estimate and the maximum-likelihood estimate\n",
"shows how the model is pooling the data.\n",
"\n",
"Original data (movies x rating counts):\n",
"\n",
"[[ 0 0 0 0 0 2]\n",
" [ 0 0 0 0 1 4]\n",
" [ 0 1 1 2 4 2]\n",
" [ 0 3 3 6 12 6]\n",
" [ 0 0 2 4 5 1]\n",
" [ 0 1 3 3 2 1]\n",
" [ 1 0 4 4 1 2]]\n",
"\n",
"Maximum-likelihood probability of choosing a rating from the raw count data:\n",
"\n",
"[ 0.01234568 0.0617284 0.16049383 0.2345679 0.30864198 0.22222222]\n",
"\n",
"Mean over each Dirichlet distribution (bayespy):\n",
"(Note: there isn't really a \"pooled\" distribution in this case)\n",
"\n",
"[ 0.07647908 0.09830447 0.16378066 0.20048701 0.23033911 0.23060967]\n",
"\n",
"Top-level prior parameters (pyMC3):\n",
"\n",
"[0.02406627, 0.07585432, 0.17492069, 0.22285698, 0.250879, 0.25142274]\n"
]
}
],
"source": [
"print(\"\"\"\n",
"Comparing the top-level prior parameter estimate and the maximum-likelihood estimate\n",
"shows how the model is pooling the data.\"\"\")\n",
"\n",
"print(\"\"\"\n",
"Original data (movies x rating counts):\n",
"\"\"\")\n",
"print(data)\n",
"\n",
"print(\"\"\"\n",
"Maximum-likelihood probability of choosing a rating from the raw count data:\n",
"\"\"\")\n",
"print(normalizedRatingCountMeans)\n",
"\n",
"print(\"\"\"\n",
"Mean over each Dirichlet distribution (bayespy):\n",
"(Note: there isn't really a \"pooled\" distribution in this case)\n",
"\"\"\")\n",
"print(theta_mean)\n",
"\n",
"print(\"\"\"\n",
"Top-level prior parameters (pyMC3):\n",
"\"\"\")\n",
"print([ 0.02406627, 0.07585432, 0.17492069, 0.22285698, 0.250879, 0.25142274])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Comparing the inferred estimates of the probability of choosing a rating for\n",
"individual movies against the respective maximum-likelihood estimates is\n",
"also informative.\n",
"\n",
"Rating count data for movies 1 and 2 below. Movie 1 has fewer than half as\n",
"many ratings as movie 2 but all of its ratings are 5-star. Movie 2 has more\n",
"5-star ratings but also one 4-star rating. The maximum-likelihood estimate\n",
"would ignore the number of ratings and focus on the proportions. The smoothed\n",
"inferred estimate accounts for the number of ratings and also applies the\n",
"pooled prior estimate of the rating probabilities.\n",
"\n",
"[[0 0 0 0 0 2]\n",
" [0 0 0 0 1 4]]\n",
"\n",
"Maximum-likelihood rating probability estimates for movies 1 and 2:\n",
"\n",
"[[ 0. 0. 0. 0. 0. 1. ]\n",
" [ 0. 0. 0. 0. 0.2 0.8]]\n",
"\n",
"Inferred rating probability estimates for movies 1 and 2 (bayespy):\n",
"\n",
"[[ 0.125 0.125 0.125 0.125 0.125 0.375 ]\n",
" [ 0.09090909 0.09090909 0.09090909 0.09090909 0.18181818 0.45454545]]\n",
"\n",
"Inferred rating probability estimates for movies 1 and 2 (pyMC3):\n",
"\n",
"[[ 0.02144172 0.05540504 0.12788657 0.16527906 0.18168197 0.44830564]\n",
" [ 0.01063829 0.04511137 0.09015204 0.11610496 0.22394176 0.51405158]]\n",
"\n",
"Unlike the maximum-likelihood estimates, the inferred estimate places\n",
"higher probability on choosing a 5-star rating for movie 2 than for\n",
"movie 1.\n",
"\n"
]
}
],
"source": [
"print(\"\"\"\n",
"Comparing the inferred estimates of the probability of choosing a rating for\n",
"individual movies against the respective maximum-likelihood estimates is\n",
"also informative.\"\"\")\n",
"\n",
"print(\"\"\"\n",
"Rating count data for movies 1 and 2 below. Movie 1 has fewer than half as\n",
"many ratings as movie 2 but all of its ratings are 5-star. Movie 2 has more\n",
"5-star ratings but also one 4-star rating. The maximum-likelihood estimate\n",
"would ignore the number of ratings and focus on the proportions. The smoothed\n",
"inferred estimate accounts for the number of ratings and also applies the\n",
"pooled prior estimate of the rating probabilities.\n",
"\"\"\")\n",
"\n",
"print(data[0:2])\n",
"\n",
"print(\"\"\"\n",
"Maximum-likelihood rating probability estimates for movies 1 and 2:\n",
"\"\"\")\n",
"print(normed_data[0:2])\n",
"\n",
"print(\"\"\"\n",
"Inferred rating probability estimates for movies 1 and 2 (bayespy):\n",
"\"\"\")\n",
"print(p[0:2])\n",
"\n",
"print(\"\"\"\n",
"Inferred rating probability estimates for movies 1 and 2 (pyMC3):\n",
"\"\"\")\n",
"print(\"\"\"[[ 0.02144172 0.05540504 0.12788657 0.16527906 0.18168197 0.44830564]\n",
" [ 0.01063829 0.04511137 0.09015204 0.11610496 0.22394176 0.51405158]]\"\"\")\n",
"\n",
"print(\"\"\"\n",
"Unlike the maximum-likelihood estimates, the inferred estimate places\n",
"higher probability on choosing a 5-star rating for movie 2 than for\n",
"movie 1.\n",
"\"\"\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Rating count data for movies 3 and 4 below. Movies 3 and 4 have identical\n",
"rating count proportions (and identical maximum-likelihood estimates) but\n",
"movie 4 has three times as many ratings as movie 3. The higher number\n",
"of ratings should push the inferred estimate of movie 4 closer to the\n",
"maximum-likelihood estimate as it should start to dominate the prior.\n",
"\n",
"[[ 0 1 1 2 4 2]\n",
" [ 0 3 3 6 12 6]]\n",
"\n",
"Maximum-likelihood rating probability estimates for movies 3 and 4:\n",
"\n",
"[[ 0. 0.1 0.1 0.2 0.4 0.2]\n",
" [ 0. 0.1 0.1 0.2 0.4 0.2]]\n",
"\n",
"Inferred rating probability estimates for movies 3 and 4 (bayespy):\n",
"\n",
"[[ 0.0625 0.125 0.125 0.1875 0.3125 0.1875 ]\n",
" [ 0.02777778 0.11111111 0.11111111 0.19444444 0.36111111 0.19444444]]\n",
"\n",
"Inferred rating probability estimates for movies 3 and 4 (pyMC3):\n",
"\n",
"[[ 0.00738755 0.09079506 0.12985281 0.20999552 0.3488494 0.21311968]\n",
" [ 0.00366227 0.09739613 0.10971591 0.2088047 0.36784195 0.21257903]]\n",
"\n",
"The inferred estimate for movie 4 is closer to the maximum-likelihood\n",
"estimate as we would expect given its higher number of overall counts.\n",
"\n"
]
}
],
"source": [
"print(\"\"\"\n",
"Rating count data for movies 3 and 4 below. Movies 3 and 4 have identical\n",
"rating count proportions (and identical maximum-likelihood estimates) but\n",
"movie 4 has three times as many ratings as movie 3. The higher number\n",
"of ratings should push the inferred estimate of movie 4 closer to the\n",
"maximum-likelihood estimate as it should start to dominate the prior.\n",
"\"\"\")\n",
"\n",
"print(data[2:4])\n",
"\n",
"print(\"\"\"\n",
"Maximum-likelihood rating probability estimates for movies 3 and 4:\n",
"\"\"\")\n",
"print(normed_data[2:4])\n",
"\n",
"print(\"\"\"\n",
"Inferred rating probability estimates for movies 3 and 4 (bayespy):\n",
"\"\"\")\n",
"print(p[2:4])\n",
"\n",
"print(\"\"\"\n",
"Inferred rating probability estimates for movies 3 and 4 (pyMC3):\n",
"\"\"\")\n",
"print(\"\"\"[[ 0.00738755 0.09079506 0.12985281 0.20999552 0.3488494 0.21311968]\n",
" [ 0.00366227 0.09739613 0.10971591 0.2088047 0.36784195 0.21257903]]\"\"\")\n",
"\n",
"print(\"\"\"\n",
"The inferred estimate for movie 4 is closer to the maximum-likelihood\n",
"estimate as we would expect given its higher number of overall counts.\n",
"\"\"\")"
]
},
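{
"cell_type": "markdown",
"metadata": {},
"source": [
"The shrinkage claim can also be checked numerically: movies 3 and 4 share the same maximum-likelihood estimate, so the total deviation of each inferred estimate from it measures how strongly each movie is pulled toward the prior."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"ml = normed_data[2]                # identical ML estimate for movies 3 and 4\n",
"dev3 = np.abs(p[2] - ml).sum()     # deviation from ML, movie 3 (10 ratings)\n",
"dev4 = np.abs(p[3] - ml).sum()     # deviation from ML, movie 4 (30 ratings)\n",
"print(dev3, dev4)\n",
"assert dev4 < dev3                 # more data -> closer to the ML estimate"
]
},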
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"To compute an average rating for each movie, a maximum-likelihood\n",
"approach could be used (in this case an average) or the inferred\n",
"rating probabilities can be used.\n",
"\n",
"Raw movie rating count data for reference (movie x rating counts):\n",
"\n",
"[[ 0 0 0 0 0 2]\n",
" [ 0 0 0 0 1 4]\n",
" [ 0 1 1 2 4 2]\n",
" [ 0 3 3 6 12 6]\n",
" [ 0 0 2 4 5 1]\n",
" [ 0 1 3 3 2 1]\n",
" [ 1 0 4 4 1 2]]\n",
"\n",
"Maximum-likelihood estimate of rating for each movie:\n",
"\n",
"[ 5. 4.8 3.5 3.5 3.41666667 2.9\n",
" 2.83333333]\n",
"\n",
"Inferred estimate of rating for each movie (bayespy):\n",
"\n",
"[ 3.125 3.54545455 3.125 3.33333333 3.11111111 2.75\n",
" 2.72222222]\n",
"\n",
"Inferred estimate of rating for each movie (pyMC3):\n",
"\n",
"[ 3.77527145 4.03975528 3.44148319 3.47750503 3.39474854 3.09424091\n",
" 3.00101262]\n",
"\n",
"The inferred estimate for movie 2 is higher than that of movie 1\n",
"(the reverse of the maximum-likelihood ordering), as expected\n",
"given the higher number of overall ratings for movie 2.\n",
"\n",
"The inferred estimate for movie 4 moves closer to the\n",
"maximum-likelihood estimate than movie 3 as expected because\n",
"of the higher number of ratings (the influence of the prior\n",
"can be changed with the concentration parameter).\n",
"\n",
"In the pyMC3 model, the inferred estimate for movie 7 is increased\n",
"relative to the maximum-likelihood estimate despite the presence\n",
"of a zero rating (in the bayespy model it decreases slightly).\n"
]
}
],
"source": [
"print(\"\"\"\n",
"To compute an average rating for each movie, a maximum-likelihood\n",
"approach could be used (in this case an average) or the inferred\n",
"rating probabilities can be used.\"\"\")\n",
"\n",
"print(\"\"\"\n",
"Raw movie rating count data for reference (movie x rating counts):\n",
"\"\"\")\n",
"print(data)\n",
"\n",
"print(\"\"\"\n",
"Maximum-likelihood estimate of rating for each movie:\n",
"\"\"\")\n",
"print(maximumLikelihoodRatingAverages)\n",
"\n",
"print(\"\"\"\n",
"Inferred estimate of rating for each movie (bayespy):\n",
"\"\"\")\n",
"print(inferredRatingAverages)\n",
"\n",
"print(\"\"\"\n",
"Inferred estimate of rating for each movie (pyMC3):\n",
"\"\"\")\n",
"print(\"\"\"[ 3.77527145 4.03975528 3.44148319 3.47750503 3.39474854 3.09424091\n",
" 3.00101262]\"\"\")\n",
"\n",
"print(\"\"\"\n",
"The inferred estimate for movie 2 is higher than that of movie 1\n",
"(the reverse of the maximum-likelihood ordering), as expected\n",
"given the higher number of overall ratings for movie 2.\"\"\")\n",
"\n",
"print(\"\"\"\n",
"The inferred estimate for movie 4 moves closer to the\n",
"maximum-likelihood estimate than movie 3 as expected because\n",
"of the higher number of ratings (the influence of the prior\n",
"can be changed with the concentration parameter).\"\"\")\n",
"\n",
"print(\"\"\"\n",
"In the pyMC3 model, the inferred estimate for movie 7 is increased\n",
"relative to the maximum-likelihood estimate despite the presence\n",
"of a zero rating (in the bayespy model it decreases slightly).\"\"\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Discussion of Differences: bayespy vs. pyMC models\n",
"\n",
"The bayespy model constructed in this notebook is very simple: it specifies a fixed prior over the Dirichlet distributions. We can think of this prior as uninformative because its concentration parameters are all equal to 1. This prior Dirichlet distribution corresponds to the uniform distribution over $\\theta$, which suggests that until we have data, the probability of seeing a particular rating could be anything! Thus, the prior says that:\n",
"\n",
"$\\mathbb{P}\\left(\\theta = [0,~0,~0,~0,~0,~1]\\right) = \\mathbb{P}\\left(\\theta = \\left[\\frac{1}{6},~\\frac{1}{6},~\\frac{1}{6},~\\frac{1}{6},~\\frac{1}{6},~\\frac{1}{6}\\right]\\right)$\n",
"\n",
"The main weakness of this model is that no dependency is enforced between the categorical distributions for the different movies ($\\theta_m$), so there is no pooling. This is evident in $\\theta_1$, which shows that ratings between 0 and 4 are equally likely (despite the ratings for the other movies suggesting otherwise). Interestingly, many of the qualitative properties of the model remain intact (see the previous section). Overall, though, the inferred estimate of each movie's rating is significantly lower in the bayespy model than in the pyMC model; this is perhaps the most glaring difference, and it favors the pyMC model.\n",
"\n",
"Attempts were made to replicate the process of placing gamma priors on the concentration parameters of $\\theta_m$, as implemented in [1] and described in more detail in [[2]](http://tdunning.blogspot.com/2010/04/sampling-dirichlet-distribution-revised.html). However, bayespy was not able to handle all of the dependencies between the random variables properly. Another approach worth considering uses a redundantly-parameterized normal distribution as a prior over Dirichlet distributions; it is described in [[3]](http://andrewgelman.com/2009/04/29/conjugate_prior/) and may be attempted in the near future."
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.2"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
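As a quick check on the bayespy numbers printed in the notebook (this sketch is mine, not part of the original notebook; variable names such as `posterior_mean` and `inferred` are assumptions): because the fixed Dirichlet(1, ..., 1) prior is conjugate to the categorical likelihood, each movie's posterior is available in closed form, so the inferred averages can be reproduced with plain NumPy.

```python
import numpy as np

# Rating count data from the notebook (rows = movies, columns = ratings 0..5).
data = np.array([[0, 0, 0, 0, 0, 2],
                 [0, 0, 0, 0, 1, 4],
                 [0, 1, 1, 2, 4, 2],
                 [0, 3, 3, 6, 12, 6],
                 [0, 0, 2, 4, 5, 1],
                 [0, 1, 3, 3, 2, 1],
                 [1, 0, 4, 4, 1, 2]])

K = data.shape[1]  # number of rating levels (0..5)

# With a fixed Dirichlet(1, ..., 1) prior, each movie's posterior is
# Dirichlet(counts_m + 1), whose mean is (counts_m + 1) / (N_m + K).
posterior_mean = (data + 1) / (data.sum(axis=1, keepdims=True) + K)

# Expected rating per movie under the posterior mean.
inferred = posterior_mean @ np.arange(K)
print(inferred)
# ≈ [3.125, 3.545, 3.125, 3.333, 3.111, 2.75, 2.722]
```

These values agree with the bayespy inferred averages printed above, which supports the point made in the discussion cell: with a fixed uniform prior the per-movie posteriors are independent, so no pooling between movies takes place.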
@BradLee77

Cool! I like your matrix visualization as well :)

@avonmoll
Author

I should have mentioned: the code to generate those comes from the matplotlib docs
