{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"toc_visible": true,
"authorship_tag": "ABX9TyOq6bvFhXaSXJ3+9UdQv7Ip",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/ricardoV94/c421ddacb3ba7a19ac46efa253ea466c/rosetta_stone_pymc_stan.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# A rosetta stone for PyMC and Stan"
],
"metadata": {
"id": "Sdp0Vl7pKKjj"
}
},
{
"cell_type": "markdown",
"source": [
"This notebook discusses how models can be translated between PyMC and Stan, highlightling syntax differences as well as general emphasis in how a model is constructed and used between the two libraries."
],
"metadata": {
"id": "nI3lDw-pKSAz"
}
},
{
"cell_type": "code",
"source": [
"%%capture\n",
"!pip install pystan"
],
"metadata": {
"id": "Uc8-PyEIRB8T"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"\n",
"import stan\n",
"import nest_asyncio\n",
"\n",
"from pymc import (\n",
" Model,\n",
" Data,\n",
" Potential,\n",
" CustomDist,\n",
" Flat,\n",
" Normal,\n",
" Exponential,\n",
" HalfNormal,\n",
" observe,\n",
" draw,\n",
" sample_prior_predictive,\n",
")\n",
"from pymc.distributions import transforms\n",
"from pymc import math"
],
"metadata": {
"id": "vt4UPvCxQyAi"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"nest_asyncio.apply() # Needed for stan in a Jupyter/Colab environment"
],
"metadata": {
"id": "ilClJudBSkFr"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## A simple model"
],
"metadata": {
"id": "Xik9csu6Kx9T"
}
},
{
"cell_type": "markdown",
"source": [
"Let's define a very simple model in both libraries.\n",
"\n",
"$$\n",
"\\begin{align*}\n",
"x_i &\\sim \\text{Exponential}(1) \\quad \\text{for } i = 1, 2, 3 \\\\\n",
"\\sigma_i &= x_i + 1 \\\\\n",
"y_i &\\sim \\text{Normal}(0, \\sigma_i) \\quad \\text{for } i = 1, 2, 3 \\\\\n",
"\\end{align*}\n",
"$$"
],
"metadata": {
"id": "Zp3Qjzm2K4tE"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1_i7fGCYP-uH"
},
"outputs": [],
"source": [
"with Model() as pymc_model:\n",
" x = Exponential(\"x\", 1, shape=(3,))\n",
" sigma = x + 1\n",
" Normal(\"y\", 0, sigma, observed=[1, 2, 3])"
]
},
{
"cell_type": "code",
"source": [
"stan_model_code = \"\"\"\n",
"data {\n",
" int< lower=0 > N;\n",
" vector[N] y;\n",
"}\n",
"\n",
"parameters {\n",
" vector< lower=0 >[3] x;\n",
"}\n",
"\n",
"model {\n",
" x ~ exponential(1);\n",
" vector[N] sigma = x + 1;\n",
" y ~ normal(0, sigma);\n",
"}\n",
"\"\"\"\n",
"\n",
"data = {\n",
" 'N': 3,\n",
" 'y': [1, 2, 3]\n",
"}\n",
"stan_model = stan.build(program_code=stan_model_code, data=data)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "_ybxeIKlQjOm",
"outputId": "585317ca-bc19-4082-c954-3a3087eba81a"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Building...\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"\n",
"Building: 57.6s, done."
]
}
]
},
{
"cell_type": "markdown",
"source": [
"Before we compare the syntaxes, let's confirm they are equivalent, by checking the log_prob matches.\n",
"\n",
"By default STAN drops normalizing constants out of densities, whereas PyMC does not, so we'll have to check that the delta between two points matches."
],
"metadata": {
"id": "Hnw4OkYGTa2d"
}
},
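{
"cell_type": "markdown",
"source": [
"To spell out why comparing deltas works: if one library's log density differs from the other's only by an additive constant $C$ (the dropped normalizing terms), the constant cancels in the difference between any two points,\n",
"\n",
"$$\n",
"(\\log p(x_1) + C) - (\\log p(x_2) + C) = \\log p(x_1) - \\log p(x_2)\n",
"$$"
],
"metadata": {}
},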
{
"cell_type": "code",
"source": [
"x_log_test_value1 = [-1, 0, 1]\n",
"x_log_test_value2 = [1, 1, 2]"
],
"metadata": {
"id": "a_kwgSyLTKCt"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"res1 = stan_model.log_prob(x_log_test_value1)\n",
"res2 = stan_model.log_prob(x_log_test_value2)\n",
"f\"{res1=:.3f}, {res2=:.3f}, {res1-res2=:.3f}\""
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "_9GTEzUrQ88E",
"outputId": "97359799-1b39-4c1f-86e3-d9d1214066f3"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'res1=-7.499, res2=-13.824, res1-res2=6.325'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 7
}
]
},
{
"cell_type": "code",
"source": [
"pymc_log_prob = pymc_model.compile_logp()\n",
"res1 = pymc_log_prob({\"x_log__\": x_log_test_value1})\n",
"res2 = pymc_log_prob({\"x_log__\": x_log_test_value2})\n",
"f\"{res1=:.3f}, {res2=:.3f}, {res1-res2=:.3f}\""
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "hzUNvbIFTPYa",
"outputId": "d94ce2e1-35a3-4eb3-e1c6-bce0a5c4b5e5"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'res1=-10.255, res2=-16.581, res1-res2=6.325'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 8
}
]
},
{
"cell_type": "markdown",
"source": [
"## Line by line comparison"
],
"metadata": {
"id": "6_GrpV1kUuzD"
}
},
{
"cell_type": "markdown",
"source": [
"### Data"
],
"metadata": {
"id": "xKac5bDPxWx9"
}
},
{
"cell_type": "markdown",
"source": [
"The first Stan block defines the data by specifying it's a vector of length `N`, called `y`.\n",
"\n",
"```stan\n",
"data {\n",
" int< lower=0 > N;\n",
" vector[N] y;\n",
"}\n",
"```\n",
"\n",
"This is all defined implicitly in the PyMC model by specifying `observed = [1, 2, 3]`, but can be done explicitly by using a `Data` container:\n",
"\n",
"```python\n",
"with pm.Model() as pymc_model:\n",
" y_data = pm.Data(\"y_data\", [1, 2, 3])\n",
" ...\n",
" pm.Normal(\"y\", ..., observed=y_data)\n",
"```\n",
"\n",
"Note we have to give a different name to the data container and the y variable. This is so we can access either by name later (`pymc_model[\"y\"]` and `pymc_model[\"y_data\"]`)"
],
"metadata": {
"id": "o7uaSpn9U_5l"
}
},
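{
"cell_type": "markdown",
"source": [
"One payoff of the explicit container (a sketch using `pm.set_data`, which this notebook doesn't otherwise exercise): the data can be swapped out after the model is built, much like passing a new `data` dict to `stan.build`.\n",
"\n",
"```python\n",
"with pymc_model:\n",
"    # replace the contents of the \"y_data\" container in place;\n",
"    # subsequent sampling will see the new values\n",
"    pm.set_data({\"y_data\": [4, 5, 6]})\n",
"```"
],
"metadata": {}
},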
{
"cell_type": "markdown",
"source": [
"### Parameters"
],
"metadata": {
"id": "5brozJ-XWW5K"
}
},
{
"cell_type": "markdown",
"source": [
"One of the largest differences between the two libraries, is that in Stan you have to explicitly define the model parameters and constraints, separately from how they \"are distributed\"."
],
"metadata": {
"id": "oq2mDSmOxoyx"
}
},
{
"cell_type": "markdown",
"source": [
"```\n",
"parameters {\n",
" vector< lower=0 >[3] x;\n",
"}\n",
"```\n",
"\n"
],
"metadata": {
"id": "91DrPv_jxgqL"
}
},
{
"cell_type": "markdown",
"source": [
"In PyMC a parameter is implictly defined everytime a \"named variable\" is defined inside a model.\n",
"\n",
"```ptyhon\n",
"x = Exponential(\"x\", 1, shape=(3,))\n",
"```\n",
"\n",
"Note the repetition of `x`. The Python variable name need not match the model variable name, and you can also reassign it to other Python variables just like you would expect from python objects.\n",
"\n",
"If you do not need to use `x`, you don't need to assign it to any python Variable, like we do with `y` later.\n",
"\n",
"\n",
"The variable assigned to `x` in PyMC is rather different in nature than the Stan one. It is not a \"parameter\", but a \"random variable\" that you can evaluate to take random draws."
],
"metadata": {
"id": "YmAeQQ4Xx2_S"
}
},
{
"cell_type": "code",
"source": [
"draw(x)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "wrTS4u-iyNpP",
"outputId": "120de97f-b760-4987-a030-5cee3393546a"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([0.83493973, 0.50597248, 1.46389397])"
]
},
"metadata": {},
"execution_count": 9
}
]
},
{
"cell_type": "markdown",
"source": [
"The actual parameter is tucked away inside the model in `rvs_to_values`:"
],
"metadata": {
"id": "0s8QoIsjySIb"
}
},
{
"cell_type": "code",
"source": [
"pymc_model.rvs_to_values[x]"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "0MKAGg3TyRMC",
"outputId": "c588c3e7-7b71-400e-9abf-8eff19ac1ecb"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"x_log__"
]
},
"metadata": {},
"execution_count": 10
}
]
},
{
"cell_type": "markdown",
"source": [
"Unlike Stan, the positive constrain tranform is automatically applied, because in regular applications of PyMC a parameter \"belongs\" to a specific distribution.\n",
"\n",
"Because, an `Exponential` is a positive-domain distribution, PyMC applies a default log transform to the parameter `x`, that encodes the constraint and frees the samplers from boundary conditions.\n",
"\n",
"The transform can be found in `rvs_to_transforms`"
],
"metadata": {
"id": "EoForEBjyjyK"
}
},
{
"cell_type": "code",
"source": [
"pymc_model.rvs_to_transforms[x]"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "9Hv87P_cWMfA",
"outputId": "8eafbe55-504d-4343-ca86-57f3e48be996"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<pymc.logprob.transforms.LogTransform at 0x790d40ddca10>"
]
},
"metadata": {},
"execution_count": 11
}
]
},
{
"cell_type": "markdown",
"source": [
"The constraint can be customized/disabled by passing an approriate value to `default_transform` when defining the random variable as in:\n",
"\n",
"`x = Exponential(\"x\", default_transform=None)`"
],
"metadata": {
"id": "1sxlW5jWWYdL"
}
},
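{
"cell_type": "markdown",
"source": [
"A quick sketch of the effect (assuming PyMC's default naming conventions): with the transform disabled, the value variable keeps the untransformed name, and the sampler has to respect the boundary at zero on its own.\n",
"\n",
"```python\n",
"with Model() as untransformed_model:\n",
"    x_raw = Exponential(\"x\", 1, shape=(3,), default_transform=None)\n",
"\n",
"untransformed_model.rvs_to_values[x_raw]  # named \"x\" rather than \"x_log__\"\n",
"```"
],
"metadata": {}
},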
{
"cell_type": "markdown",
"source": [
"Both PyMC and Stan apply the jacobian correction of the transformation by default, so that the prior is respected when sampling from the unconstrained space. Both can also be opted out:"
],
"metadata": {
"id": "nEh2DMLizbto"
}
},
{
"cell_type": "code",
"source": [
"res1 = stan_model.log_prob(x_log_test_value1, adjust_transform=False)\n",
"res2 = stan_model.log_prob(x_log_test_value2, adjust_transform=False)\n",
"f\"{res1=:.3f}, {res2=:.3f}, {res1-res2=:.3f}\""
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "mBTqqdtLJzHk",
"outputId": "4d09c679-aff1-4ca9-a290-53daf61a682e"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'res1=-7.499, res2=-17.824, res1-res2=10.325'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 12
}
]
},
{
"cell_type": "code",
"source": [
"pymc_log_prob = pymc_model.compile_logp(jacobian=False)\n",
"res1 = pymc_log_prob({\"x_log__\": x_log_test_value1})\n",
"res2 = pymc_log_prob({\"x_log__\": x_log_test_value2})\n",
"f\"{res1=:.3f}, {res2=:.3f}, {res1-res2=:.3f}\""
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "-KSDHhwq2Qw9",
"outputId": "30628499-f016-4e8c-a711-fbff16f58601"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'res1=-10.255, res2=-20.581, res1-res2=10.325'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 13
}
]
},
{
"cell_type": "markdown",
"source": [
"### Intermediate variables"
],
"metadata": {
"id": "ynXDFB09zyzi"
}
},
{
"cell_type": "markdown",
"source": [
"In Stan, if you explicitly assign intermediate variables, you have to define their type:\n",
"\n",
"```stan\n",
"vector[N] sigma = x + 1;\n",
"```\n",
"\n",
"\n",
"In PyMC the type is inferred automatically\n",
"```python\n",
"sigma = x + 1\n",
"```"
],
"metadata": {
"id": "NbNGXycXz4cz"
}
},
{
"cell_type": "code",
"source": [
"sigma.type"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "73xRPffe0Ya8",
"outputId": "25636268-26b3-4b7f-bffc-9811c14bcdef"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"TensorType(float64, shape=(3,))"
]
},
"metadata": {},
"execution_count": 14
}
]
},
{
"cell_type": "markdown",
"source": [
"Both Stan and PyMC allow you to save these intermediate variables during sampling.\n",
"\n",
"For Stan you include the variable definition inside a `generated quantities` block.\n",
"\n",
"For PyMC you wrap the variable in a `Deterministic` block (even though the variable need not be deterministic!)."
],
"metadata": {
"id": "-M9rp0TY0bFo"
}
},
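{
"cell_type": "markdown",
"source": [
"A minimal sketch of the PyMC version (`Deterministic` is not in the import cell above):\n",
"\n",
"```python\n",
"from pymc import Deterministic\n",
"\n",
"with Model() as pymc_model_det:\n",
"    x = Exponential(\"x\", 1, shape=(3,))\n",
"    # sigma is now computed and stored alongside the draws of x\n",
"    sigma = Deterministic(\"sigma\", x + 1)\n",
"    Normal(\"y\", 0, sigma, observed=[1, 2, 3])\n",
"```\n",
"\n",
"In Stan you would instead move `vector[N] sigma = x + 1;` into a `generated quantities` block (or into `transformed parameters`, if the model block also needs it)."
],
"metadata": {}
},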
{
"cell_type": "markdown",
"source": [
"### Likelihood"
],
"metadata": {
"id": "Z4ed24Wi084s"
}
},
{
"cell_type": "markdown",
"source": [
"In Stan, after having specified `y` is data, we assign its density like we do with a regular parameter\n",
"\n",
"```stan\n",
"y ~ normal(0, sigma);\n",
"```\n",
"\n",
"In PyMC you pass the observed argument instead.\n",
"\n",
"```python\n",
"pm.Normal(\"y\", ..., observed=y_data)\n",
"```\n",
"\n",
"Because we don't intend to use `y` anywhere else, we didn't assign it to a Python variable. We can still retrieve it, and we'll see that like `x` this is just another \"random variable\"."
],
"metadata": {
"id": "z4XzUoYZ20D_"
}
},
{
"cell_type": "code",
"source": [
"y = pymc_model[\"y\"]\n",
"draw(y)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "1JToXgXv2zbx",
"outputId": "4b74cd88-91d0-49e1-ad88-c4356f5e1ec9"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([ 5.42768049, -7.29585954, 2.58802198])"
]
},
"metadata": {},
"execution_count": 15
}
]
},
{
"cell_type": "markdown",
"source": [
"The only immediate sign of the `observed` argument is the shape is automatically derived from the observed data.\n",
"\n",
"Under the hood, the variable is distinguished by being stored in a separate model property: `observed_RVs`, instead of `free_RVs`"
],
"metadata": {
"id": "QsE6OCe00XRY"
}
},
{
"cell_type": "code",
"source": [
"pymc_model.observed_RVs, pymc_model.free_RVs"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "p2YF-dpmz2gh",
"outputId": "65d9bc06-2320-4083-d113-3190a85ba684"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"([y], [x])"
]
},
"metadata": {},
"execution_count": 16
}
]
},
{
"cell_type": "markdown",
"source": [
"The `observed` constitutes the \"value\" of the `y` RV, which is a constant, not a parameter."
],
"metadata": {
"id": "o0dafWks31la"
}
},
{
"cell_type": "code",
"source": [
"pymc_model.rvs_to_values[y]"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "PzMNhpCn3yoW",
"outputId": "ebf883b6-6e57-4e32-9e4c-1f606cdf9060"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"TensorConstant(TensorType(float64, shape=(3,)), data=array([1., 2., 3.]))"
]
},
"metadata": {},
"execution_count": 17
}
]
},
{
"cell_type": "markdown",
"source": [
"### Aside: observed can be deferred.\n",
"\n",
"PyMC introduced a `pm.observe` model transformation, which allows one to define the observed quantities at a later time. In this case the shape (or dims) have to be provided explicitly."
],
"metadata": {
"id": "oF_AhqVn4GkI"
}
},
{
"cell_type": "code",
"source": [
"with Model() as raw_model:\n",
" # Not assigning to Python variables to avoid shadowing the ones from the original example\n",
" Exponential(\"x\", 1, shape=(3,))\n",
" Normal(\"y\", 0, raw_model[\"x\"] + 1, shape=(3,))\n",
"\n",
"print(raw_model.free_RVs, raw_model.observed_RVs)\n",
"\n",
"observed_model = observe(raw_model, {\"y\": [1, 2, 3]})\n",
"\n",
"print(observed_model.free_RVs, observed_model.observed_RVs)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Qd483_Y04r50",
"outputId": "63f3c825-a9b4-45fc-8a9b-35507fa22ecd"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"[x, y] []\n",
"[x] [y]\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"These shapes can be defined in a way that the lengths are not pre-commited, but the user must still pre-commit to the dimensionality (`ndim`) of the variable."
],
"metadata": {
"id": "DdKrkv3E4sRN"
}
},
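{
"cell_type": "markdown",
"source": [
"A sketch of one way to do that, assuming symbolic shapes via a `Data` placeholder (supported in recent PyMC versions): the variables stay 1-dimensional, but their length follows whatever is currently stored in the container.\n",
"\n",
"```python\n",
"with Model() as raw_model2:\n",
"    y_data = Data(\"y_data\", np.zeros(3))  # placeholder; length can change later\n",
"    Exponential(\"x\", 1, shape=y_data.shape)\n",
"    Normal(\"y\", 0, raw_model2[\"x\"] + 1, shape=y_data.shape)\n",
"```"
],
"metadata": {}
},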
{
"cell_type": "markdown",
"source": [
"## Forward draws"
],
"metadata": {
"id": "7e43BB2r5WR4"
}
},
{
"cell_type": "markdown",
"source": [
"When we define a PyMC model we are actually defining the random graph of the model, from which we can take draws by simply evaluating the nodes in a \"forward sampling scheme\"."
],
"metadata": {
"id": "241gyPmz5ZGj"
}
},
{
"cell_type": "code",
"source": [
"draw(x)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "hEhdXe275X3I",
"outputId": "472b468c-a61e-4a9d-f09b-79852f213777"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([0.14709294, 0.29718136, 2.39086696])"
]
},
"metadata": {},
"execution_count": 19
}
]
},
{
"cell_type": "code",
"source": [
"draw(y)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "2CczQoWc53De",
"outputId": "741c225a-6b09-4b3f-a5be-b088e27d820a"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([-1.21047051, 2.66030059, -2.09453266])"
]
},
"metadata": {},
"execution_count": 20
}
]
},
{
"cell_type": "markdown",
"source": [
"When calling `draw(y)` alone, new draws of `x` are implicitly drawn as well, just not returned.\n",
"\n",
"One can request draws from multiple variables simultaneously to get the joint \"consistent\" forward samples of both random variables."
],
"metadata": {
"id": "J7xqUe0H56r7"
}
},
{
"cell_type": "code",
"source": [
"draw([x, y])"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "IIFGEOKl54iP",
"outputId": "f05652e4-849c-4fed-ccad-86839ca755a3"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"[array([1.96750042, 0.44927411, 0.48087822]),\n",
" array([ 0.84274034, -1.69644508, 1.58921521])]"
]
},
"metadata": {},
"execution_count": 21
}
]
},
{
"cell_type": "markdown",
"source": [
"PyMC does just this for prior predictive sampling, avoiding running a more expensive mcmc sampler/ worrying about convergence issues.\n",
"\n",
"Similarly, for posterior predictive sampling, PyMC takes forward draws of the observed variables, but reuses the posterior draws of the free variables instead of evaluating them again."
],
"metadata": {
"id": "hojmhnn-6Q2c"
}
},
{
"cell_type": "code",
"source": [
"implausible_posterior_x_sample = np.array([1, 10, 100.])\n",
"draw(y, givens={x: implausible_posterior_x_sample})"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "yFFU-WqH6s49",
"outputId": "d97ec18d-0bfc-4fd0-a9ca-90c97fc57d36"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([ -1.58951369, -10.38902359, 166.01541509])"
]
},
"metadata": {},
"execution_count": 22
}
]
},
{
"cell_type": "markdown",
"source": [
"To perform forward sampling in Stan one must redefine the equivalent variables using random number generators in the `generated_quantities` block.\n",
"\n",
"```stan\n",
"generated quantities {\n",
" vector[N] y_sim;\n",
" for (n in 1:N) {\n",
" y_sim[n] = normal_rng(0, sigma);\n",
" }\n",
"}\n",
"```\n",
"\n",
"(Not sure if vectorized rng functions are provided or must still be done in a loop)"
],
"metadata": {
"id": "fbbAsewg7rLU"
}
},
{
"cell_type": "markdown",
"source": [
"## The one parameter ↔ one distribution assumption"
],
"metadata": {
"id": "GwBvBAtI3_ym"
}
},
{
"cell_type": "markdown",
"source": [
"It's worth discussing this PyMC assumption a bit further, as it is where it deviates the most from Stan.\n",
"\n",
"In Stan it's valid and (straightforward) to assign multiple densities to the same parameter, or assign widely different densities to different dimensions of the parameter."
],
"metadata": {
"id": "i2xU7kIS4EtE"
}
},
{
"cell_type": "markdown",
"source": [
"Let's use a new example to illustrate this. It's almost the same as the previous model, except the first two entries of x are first assigned an Exponential density, and the last one a HalfNormal density.\n",
"\n",
"Then all entries are assigned a second Normal density on top of the first one."
],
"metadata": {
"id": "LD_WnxqnNiL_"
}
},
{
"cell_type": "markdown",
"source": [
"And a cringe mathematical-like definition:\n",
"\n",
"$$\n",
"\\begin{align*}\n",
"x_i &\\sim \\text{Exponential}(1) \\quad \\text{for } i = 1, 2 \\\\\n",
"x_3 &\\sim \\text{HalfNormal}(1) \\\\\n",
"x_i &\\sim \\text{Normal}(0, 1) \\quad \\text{for } i = 1, 2, 3 \\\\\n",
"y_i &\\sim \\text{Normal}(0, x_i + 1) \\quad \\text{for } i = 1, 2, 3 \\\\\n",
"\\end{align*}\n",
"$$"
],
"metadata": {
"id": "kKeEfmYCOCxj"
}
},
{
"cell_type": "code",
"source": [
"stan_model_code2 = \"\"\"\n",
"data {\n",
" int< lower=0 > N;\\\n",
" vector[N] y;\n",
"}\n",
"\n",
"parameters {\n",
" vector< lower=0 >[3] x;\n",
"}\n",
"\n",
"model {\n",
" x[1:2] ~ exponential(1);\n",
" x[3] ~ normal(0, 1);\n",
" x ~ normal(0, 1);\n",
" y ~ normal(0, x + 1);\n",
"}\n",
"\"\"\"\n",
"\n",
"data = {\n",
" 'N': 3,\n",
" 'y': [1, 2, 3]\n",
"}\n",
"\n",
"stan_model2 = stan.build(program_code=stan_model_code2, data=data)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "NBmTe5OR_PSi",
"outputId": "8139ee24-d184-41be-90c6-11356bf2ea19"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Building...\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"\n",
"Building: found in cache, done.Messages from stanc:\n",
"Warning: The parameter x has 3 priors.\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"Stan emits a warning saying the parameter has 3 priors (technically it only has 2 ^^)\n",
"\n",
"Stan doesn't have a HalfNormal distribution, instead we get one implicitly by assiging a zero-centered Normal to a positively constrained parameter."
],
"metadata": {
"id": "LWOB3rH8_ya0"
}
},
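{
"cell_type": "markdown",
"source": [
"The equivalence works because truncating a zero-centered Normal to the positive half only rescales it by a constant,\n",
"\n",
"$$\n",
"\\text{HalfNormal}(x \\mid \\sigma) = 2 \\, \\text{Normal}(x \\mid 0, \\sigma) \\quad \\text{for } x \\ge 0,\n",
"$$\n",
"\n",
"and the factor of $2$ is exactly the kind of normalizing constant Stan drops."
],
"metadata": {}
},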
{
"cell_type": "markdown",
"source": [
"PyMC doesn't really have an equivalent version of this model, at least not the sort of PyMC usage that you're likely to find in the wild. But we can achieve the same if we really want to.\n",
"\n",
"I'll show two approaches, to give a better picture."
],
"metadata": {
"id": "73TmiC-L_YcL"
}
},
{
"cell_type": "markdown",
"source": [
"### CustomDist for crazy distributions\n",
"\n",
"For the first two densities we can stay in PyMC-approved land, by using CustomDist."
],
"metadata": {
"id": "YMLT6WWqALof"
}
},
{
"cell_type": "code",
"source": [
"with Model() as pymc_model2:\n",
" def first_two_exponential_third_normal_dist(_):\n",
" return math.concatenate(\n",
" [Exponential.dist(1, shape=(2,),), HalfNormal.dist(1, shape=(1,))]\n",
" )\n",
"\n",
" x = CustomDist(\n",
" \"x\",\n",
" dist=first_two_exponential_third_normal_dist,\n",
" transform=transforms.log,\n",
" )\n",
" y = Normal(\"y\", 0, x + 1, observed=[0, 1, 2])"
],
"metadata": {
"id": "TuwU59KIAYz0"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"`CustomDist` accepts a callable as the `dist` argument, that defines an implicit distribution from a generative random graph. In this case the concatenation of two Exponential random variables and one HalfNormal random variable.\n",
"\n",
"We have to use the `.dist` to define floating random variables. These are not model variables and have no parameters, they exist just to define the graph that represents the distribution we care about. The actual random variable and parameters are the `\"x\"` defined by the `CustomDist`.\n",
"\n",
"We still live in the land of one parameter, one distribution, just a \"custom\" one. Because it was defined via the random graph, PyMC has no trouble taking draws from it:"
],
"metadata": {
"id": "SfzM-e7lBNEV"
}
},
{
"cell_type": "code",
"source": [
"draw(x)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "v3M3fDXJB53B",
"outputId": "b76438db-16f5-4b99-dce1-0f221c5bd123"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([0.99874554, 0.87974753, 0.74318369])"
]
},
"metadata": {},
"execution_count": 25
}
]
},
{
"cell_type": "markdown",
"source": [
"We no longer get default transforms, they have to be defined explicitly. I'm purposedly using the argument `transform` instead of `default_transform` here, to highlight this difference.\n",
"\n",
"**Note that the transforms don't have any effect in the random graph. Had we used a Normal instead of a HalfNormal, we would see negative values on the third entry half of the time.**"
],
"metadata": {
"id": "n5ha7BpqCPG0"
}
},
{
"cell_type": "markdown",
"source": [
"PyMC can also get the log_prob of our `CustomDist`, by reasoning symbolically about the random forward graph implied in the `dist` function. In this case it's a trivial question of pairing the right entries of the parameter with the respective densities, but it can be richer than this.\n",
"\n",
"If you are curious about what sorts of densities PyMC can infer from random graphs, check out [30 short puzzles on probability](https://colab.research.google.com/github/ricardoV94/probability-puzzles/blob/main/puzzles.ipynb)"
],
"metadata": {
"id": "BbCvtVrMCZBU"
}
},
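{
"cell_type": "markdown",
"source": [
"A tiny illustration of that symbolic reasoning (a sketch; `logp` is not in the import cell above): PyMC can recognize that the exponential of a standard Normal follows a LogNormal distribution.\n",
"\n",
"```python\n",
"from pymc import logp\n",
"\n",
"z = math.exp(Normal.dist(0, 1))  # a random graph, not a named distribution\n",
"logp(z, 1.0).eval()              # should match the LogNormal(0, 1) density at 1.0\n",
"```"
],
"metadata": {}
},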
{
"cell_type": "code",
"source": [
"pymc_model2.compile_logp()({\"x_log__\": [-1, 0, 1]})"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "e4omtzFgCzCg",
"outputId": "e7e8451a-607e-4a90-e698-ba764edfdb10"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array(-10.63434397)"
]
},
"metadata": {},
"execution_count": 26
}
]
},
{
"cell_type": "markdown",
"source": [
"### Breaking free with Potentials"
],
"metadata": {
"id": "Yb6LRFa7C6gh"
}
},
{
"cell_type": "markdown",
"source": [
"For the second density assignment we have to break free with our dream of having automatically matching random and log_prob graphs under a single PyMC model.\n",
"\n",
"PyMC provides `Potential` to define arbitrary terms that should be added to the model joint logp, and which have no random counterpart."
],
"metadata": {
"id": "3kicC6AqC-nZ"
}
},
{
"cell_type": "code",
"source": [
"with pymc_model2:\n",
" Potential(\n",
" \"non_default_density_on_x\",\n",
" Normal.logp(x, 0, 1)\n",
" )"
],
"metadata": {
"id": "jSXJqltuDaLX"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"(Annoyingly you have to give them unique names...)"
],
"metadata": {
"id": "r9785_O7ERJr"
}
},
{
"cell_type": "markdown",
"source": [
"Now the two models should be equivalent"
],
"metadata": {
"id": "EFOMzZ3gEUQO"
}
},
{
"cell_type": "code",
"source": [
"res1 = stan_model2.log_prob(x_log_test_value1)\n",
"res2 = stan_model2.log_prob(x_log_test_value2)\n",
"f\"{res1=:.3f}, {res2=:.3f}, {res1-res2=:.3f}\""
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "1snvneTcEdvK",
"outputId": "58586252-9e5a-4211-d5c5-bce18bd70827"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'res1=-12.737, res2=-68.422, res1-res2=55.685'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 28
}
]
},
{
"cell_type": "code",
"source": [
"pymc_log_prob = pymc_model2.compile_logp()\n",
"res1 = pymc_log_prob({\"x_log__\": x_log_test_value1})\n",
"res2 = pymc_log_prob({\"x_log__\": x_log_test_value2})\n",
"f\"{res1=:.3f}, {res2=:.3f}, {res1-res2=:.3f}\""
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "TJjhoSsKEsUf",
"outputId": "52993698-c628-46fe-d797-49b7ec8a7e46"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'res1=-17.653, res2=-73.981, res1-res2=56.328'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 29
}
]
},
{
"cell_type": "markdown",
"source": [
"(It's not exactly the same though, so I may be missing something small, or we see rounding errors here)"
],
"metadata": {
"id": "7ar_vEeeKBdt"
}
},
{
"cell_type": "markdown",
"source": [
"And if you try to use a forward sampling routine like `sample_prior_predictive` or `sample_posterior_predictive` PyMC will warn you that Potential terms are simply ignored by those."
],
"metadata": {
"id": "ifsPmtt5E4yM"
}
},
{
"cell_type": "code",
"source": [
"with pymc_model2:\n",
" sample_prior_predictive()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "4RrRZHBkE2dr",
"outputId": "bb0f90a4-8d18-4a7d-810a-472f4899c0d4"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"<ipython-input-30-72f58f10cd3e>:2: UserWarning: The effect of Potentials on other parameters is ignored during prior predictive sampling. This is likely to lead to invalid or biased predictive samples.\n",
" sample_prior_predictive()\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"We want to add a [second warning](https://github.com/pymc-devs/pymc/discussions/5404) for when a user defines a `transform` (not a `default_transform`), for similar reasons. Custom transforms don't have any effect on forward draws, and can cause a mismatch between the forward and logp sampling methods of the same model."
],
"metadata": {
"id": "MPlDl3vwRrur"
}
},
{
"cell_type": "markdown",
"source": [
"`draw` has no concept of a Model, and simply evaluates the random graph without any warnings."
],
"metadata": {
"id": "JonZi-rgFKok"
}
},
{
"cell_type": "code",
"source": [
"draw(x)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "H4dYhCyuEy6g",
"outputId": "67d092a0-9baa-4b55-b312-4b555d789b62"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([0.37580074, 0.11988222, 0.37292105])"
]
},
"metadata": {},
"execution_count": 31
}
]
},
{
"cell_type": "markdown",
"source": [
"## Writing PyMC models as if it were Stan"
],
"metadata": {
"id": "8af9ec7hFZVf"
}
},
{
"cell_type": "markdown",
"source": [
"It is possible, although not pretty, and likely silly, to translate Stan models to PyMC line-by-line.\n",
"\n",
"The trick is to force PyMC to create parameters without assigned-densities, using `Flat`, and providing custom `transform`s to define the constraints on these parameters.\n",
"\n",
"Once we have the variable returned by `Flat` we can chain arbitrary Potential terms like Stan does once you strip the syntactic sugar."
],
"metadata": {
"id": "VtM1vBenFhtP"
}
},
{
"cell_type": "code",
"source": [
"with Model() as pymc_model2_eq:\n",
" x = Flat(\"x\", shape=(3,), default_transform=transforms.log)\n",
"\n",
" Potential(\"x_term[1:2]\", Exponential.logp(x[:2], 1))\n",
" Potential(\"x_term[3]\", Normal.logp(x[2], 0, 1))\n",
" Potential(\"x_term\", Normal.logp(x, 0, 1))\n",
" Potential(\"y_term\", Normal.logp(np.array([0, 1, 2]), 0, x + 1))"
],
"metadata": {
"id": "CIcJPSFSza4T"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"pymc_log_prob = pymc_model2_eq.compile_logp()\n",
"res1 = pymc_log_prob({\"x_log__\": x_log_test_value1})\n",
"res2 = pymc_log_prob({\"x_log__\": x_log_test_value2})\n",
"f\"{res1=:.3f}, {res2=:.3f}, {res1-res2=:.3f}\""
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"id": "cFUTMrkwU8B4",
"outputId": "e9d10eb8-ffb9-4729-a969-986b4450f030"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'res1=-18.347, res2=-74.674, res1-res2=56.328'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 33
}
]
},
{
"cell_type": "markdown",
"source": [
"### Note on requesting densities from PyMC\n",
"\n",
"Final note, for brevity I used `Normal.logp(x, 0, 1)` syntax which is discouraged, as not all Distributions have a `.logp` method and the parametrization accepted by them can be non-intuitive. The \"valid\" way to get the logp would be:\n",
"\n",
"```python\n",
"from pymc import logp\n",
"\n",
"logp(Normal.dist(0, 1), x)\n",
"```\n",
"Where we create a dummy `Normal` random variate just to extract the corresponding logp term. This will work with any distribution / parametrization, as well as arbitrary graphs of random variables, as long as PyMC can derive its density."
],
"metadata": {
"id": "6AfmlSU1ZCbo"
}
}
]
}