
@ricardoV94
Last active January 26, 2023 17:16
@ColtAllen

I also did my own implementation of the refactored likelihood expression with scipy and numpy functions, and the results match those of the lifetimes likelihood:

import numpy as np
from numpy import log, logaddexp
from scipy.special import gammaln, hyp2f1


def logp_alpha_gt_beta(r, alpha, s, beta, x, t_x, T):
    # r_s_x = r + s + x; it was defined outside the function in the original
    # notebook, which caused confusion later in this thread.
    r_s_x = r + s + x
    alpha_gt_beta_refactored = r_s_x * log(alpha + t_x)

    alpha_gt_beta_hyp2f1_t1 = log(hyp2f1(r_s_x, s + 1, r_s_x + 1, (alpha - beta) / (alpha + t_x)))
    alpha_gt_beta_hyp2f1_t2 = (
        log(hyp2f1(r_s_x, s, r_s_x + 1, (alpha - beta) / (alpha + T)))
        - r_s_x * log(alpha + T)
        + alpha_gt_beta_refactored
    )

    alpha_gt_beta_A1 = gammaln(r + x) - gammaln(r) + r * log(alpha) + s * log(beta) - alpha_gt_beta_refactored
    alpha_gt_beta_A2 = log(s) - log(r_s_x) + alpha_gt_beta_hyp2f1_t1
    alpha_gt_beta_A3 = log(r + x) - log(r_s_x) + alpha_gt_beta_hyp2f1_t2

    return alpha_gt_beta_A1 + logaddexp(alpha_gt_beta_A2, alpha_gt_beta_A3)


def logp_beta_gt_alpha(r, alpha, s, beta, x, t_x, T):
    # Mirror case of logp_alpha_gt_beta with the hyp2f1 parameters swapped;
    # r_s_x = r + s + x as above.
    r_s_x = r + s + x
    beta_gt_alpha_refactored = r_s_x * log(beta + t_x)

    beta_gt_alpha_hyp2f1_t1 = log(hyp2f1(r_s_x, r + x, r_s_x + 1, (beta - alpha) / (beta + t_x)))
    beta_gt_alpha_hyp2f1_t2 = (
        log(hyp2f1(r_s_x, r + x + 1, r_s_x + 1, (beta - alpha) / (beta + T)))
        - r_s_x * log(beta + T)
        + beta_gt_alpha_refactored
    )

    beta_gt_alpha_A1 = gammaln(r + x) - gammaln(r) + r * log(alpha) + s * log(beta) - beta_gt_alpha_refactored
    beta_gt_alpha_A2 = log(s) - log(r_s_x) + beta_gt_alpha_hyp2f1_t1  # was misnamed beta_gt_beta_A2
    beta_gt_alpha_A3 = log(r + x) - log(r_s_x) + beta_gt_alpha_hyp2f1_t2

    return beta_gt_alpha_A1 + logaddexp(beta_gt_alpha_A2, beta_gt_alpha_A3)

def logp_switch(r, alpha, s, beta, x, t_x, T):

    if alpha >= beta:
        return logp_alpha_gt_beta(r, alpha, s, beta, x, t_x, T)

    else:
        return logp_beta_gt_alpha(r, alpha, s, beta, x, t_x, T)
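A note on the final `logaddexp` in each branch: it keeps the sum of the two exponentiated terms stable when A2 and A3 are large in magnitude. A minimal illustration, independent of the likelihood itself:

```python
import numpy as np

# Computing log(exp(a) + exp(b)) directly overflows once a or b is large;
# np.logaddexp evaluates the same quantity without leaving log space.
a, b = 800.0, 805.0
with np.errstate(over="ignore"):
    naive = np.log(np.exp(a) + np.exp(b))  # exp(800) overflows, result is inf
stable = np.logaddexp(a, b)  # finite: b + log1p(exp(a - b))
print(naive, stable)
```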

Something is amiss with the pytensor functions we're using. Does the new hyp2f1 op perhaps require a second look?
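As a quick scipy-side sanity check (a sketch added here, independent of the pytensor op): at `alpha == beta` the hypergeometric argument is zero and `hyp2f1` collapses to 1, so the two branches of `logp_switch` must agree exactly at the switch point; a disagreement there would point at the op rather than the algebra.

```python
import numpy as np
from scipy.special import hyp2f1

# At z = 0 the Gauss hypergeometric series 2F1(a, b; c; z) reduces to its
# leading term, 1, for any parameter values. When alpha == beta the arguments
# (alpha - beta)/(alpha + t_x) and (alpha - beta)/(alpha + T) are both zero,
# so both log-likelihood branches coincide at the switch point.
for a, b, c in [(1.5, 2.0, 2.5), (3.0, 0.5, 4.0), (10.0, 1.0, 11.0)]:
    assert np.isclose(hyp2f1(a, b, c, 0.0), 1.0)
print("scipy baseline: hyp2f1(a, b, c, 0) == 1")
```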

@ricardoV94

ricardoV94 commented Jan 26, 2023

I am not getting the right results with your implementation, @ColtAllen. I added it to the newest gist. I assumed r_s_x = r * s * x, as that definition was missing.

@ricardoV94

ricardoV94 commented Jan 26, 2023

I added an implementation of the first function with PyTensor and it gets the right results: ref1_pytensor

@ColtAllen

> I am not getting the right results with your implementation @ColtAllen. I added it to the newest gist. I assumed r_s_x = r * s * x as that was missing

No, it's r + s + x; sorry it wasn't inside the function in the notebook I used.
