@jcbozonier
Last active January 16, 2017 12:45
Evaluating Hypotheses
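The gist assumes hypotheses (tuples of a, h, k, sigma for a parabola in vertex form), x_data, and observed_y already exist. A minimal setup sketch; the grid bounds and the synthetic data below are illustrative assumptions, not part of the original:

import itertools
import numpy as np

# Synthetic observations from a known parabola (illustrative only).
true_a, true_h, true_k, true_sigma = 2.0, 1.0, -3.0, 0.5
x_data = np.linspace(-2, 4, 30)
observed_y = (true_a * (x_data - true_h)**2 + true_k
              + np.random.normal(0, true_sigma, size=len(x_data)))

# One hypothesis per combination of candidate parameter values.
hypotheses = list(itertools.product(
    np.linspace(0.5, 3.5, 7),   # a: curvature
    np.linspace(-1, 3, 9),      # h: x-coordinate of the vertex
    np.linspace(-5, -1, 9),     # k: y-coordinate of the vertex
    np.linspace(0.1, 1.0, 4),   # sigma: residual standard deviation
))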
import numpy as np
import scipy.stats as ss

# We'll be working in log space as an optimization,
# so that our numbers don't get so small that the computer
# rounds them to zero.

# Start with an uninformative prior:
# every hypothesis gets an equal weighting.
# It's not important that the weights sum to one, just that
# they're equal.
hypothesis_likelihoods = np.log(np.ones(len(hypotheses)))

for i, hypotheses_tuple in enumerate(hypotheses):
    a_hypothesis, h_hypothesis, k_hypothesis, sigma_hypothesis = hypotheses_tuple
    for x_datum, y_datum in zip(x_data, observed_y):
        # For the given hypothetical value of each parameter,
        # compute a prediction and its error.
        predicted_y = a_hypothesis * (x_datum - h_hypothesis)**2 + k_hypothesis
        prediction_error = y_datum - predicted_y
        # On average, the error should be centered around zero,
        # just like in a standard regression (aka normal residuals).
        y_probability = ss.norm.pdf(prediction_error, loc=0, scale=sigma_hypothesis)
        # Multiplying probability densities in log space
        # means adding their logs.
        hypothesis_likelihoods[i] += np.log(y_probability)

hypothesis_probabilities = normalize(hypothesis_likelihoods)
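
The normalize function isn't defined in the gist. A minimal sketch, assuming its job is to turn the log-likelihoods back into probabilities that sum to one; it would need to sit above the loop in a runnable script. Subtracting the max before exponentiating (the log-sum-exp trick) keeps the exponentials from underflowing:

import numpy as np

def normalize(log_likelihoods):
    # Shift by the max so the largest exponent is exp(0) == 1,
    # avoiding underflow when all log-likelihoods are very negative.
    shifted = log_likelihoods - np.max(log_likelihoods)
    likelihoods = np.exp(shifted)
    return likelihoods / likelihoods.sum()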
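From there, a natural follow-up (not in the gist) is to read off the most probable parameter tuple:

# Maximum a posteriori estimate: the hypothesis with the highest probability.
best_a, best_h, best_k, best_sigma = hypotheses[np.argmax(hypothesis_probabilities)]
print(best_a, best_h, best_k, best_sigma)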