@feriat
Created April 6, 2015 16:45
Standard Errors Computation
import numpy as np

def se(self, params):
    """
    Return standard errors of the parameter estimates at the optimum point,
    computed from the inverse of the negative Hessian of the log-likelihood.
    """
    hess = self.hessian(params)
    return np.sqrt(np.linalg.inv(-hess).diagonal())

def hessian(self, x0, epsilon=1e-8, *args):
    """
    A numerical approximation to the Hessian matrix of the log-likelihood at
    location x0 (hopefully, the optimum), built from forward differences of
    the score (gradient) vector.
    AUTHOR: jgomezdans, https://gist.github.com/jgomezdans
    """
    # Score (first derivative of the log-likelihood) at the base point
    f1 = self.score(x0)
    # Allocate space for the Hessian
    n = x0.shape[0]
    hessian = np.zeros((n, n))
    # The next loop fills in the matrix one column at a time
    xx = x0
    for j in range(n):
        xx0 = xx[j]            # Store the old value
        xx[j] = xx0 + epsilon  # Perturb coordinate j by the finite-difference step
        # Recalculate the score at the perturbed point
        f2 = self.score(xx)
        hessian[:, j] = (f2 - f1) / epsilon  # Forward-difference approximation of column j
        xx[j] = xx0            # Restore the initial value of x0
    return hessian
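
For context, here is a minimal sketch of how these two methods would plug into a likelihood model. Everything in it (ToyModel, the Gaussian-mean setup, the simulated data) is illustrative and not part of the gist; the only assumptions are that the model exposes loglike and score, which is what the methods above expect, and that se() and hessian() are defined at module level exactly as printed here.

import numpy as np

class ToyModel:
    """Hypothetical model: i.i.d. Gaussian data with unknown mean and unit variance."""
    def __init__(self, y):
        self.y = np.asarray(y, dtype=float)

    def loglike(self, params):
        # Log-likelihood up to an additive constant
        return -0.5 * np.sum((self.y - params[0]) ** 2)

    def score(self, params):
        # Gradient of the log-likelihood with respect to the mean
        return np.array([np.sum(self.y - params[0])])

# Attach the gist's functions as methods (they already take `self` first)
ToyModel.se = se
ToyModel.hessian = hessian

rng = np.random.default_rng(0)
model = ToyModel(rng.normal(loc=2.0, scale=1.0, size=250))
mle = np.array([model.y.mean()])  # analytic MLE of the mean
print(model.se(mle))              # should be close to 1/sqrt(250) ≈ 0.0632

In this toy case the Hessian is exactly -n, so the reported standard error has an analytic target to compare against.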

feriat commented Apr 6, 2015

PROBLEM
Standard errors are horribly wrong; they converge to their population analogues only on huge data frames (over 1000 observations). I ran a Monte Carlo with 10k replications at each sample size (125, 250, 500, 1000).
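
For reference, the check described above amounts to comparing the average reported standard error with the Monte Carlo spread of the estimates across replications. A rough sketch, reusing the hypothetical ToyModel from the earlier example (mc_check, the replication count, and the true mean are all illustrative):

def mc_check(n_obs, n_rep=1000, mu_true=2.0, seed=0):
    rng = np.random.default_rng(seed)
    estimates, reported = [], []
    for _ in range(n_rep):
        model = ToyModel(rng.normal(mu_true, 1.0, size=n_obs))
        mle = np.array([model.y.mean()])
        estimates.append(mle[0])
        reported.append(model.se(mle)[0])
    # Monte Carlo SD of the estimates vs. the average standard error the code reports
    return np.std(estimates), np.mean(reported)

for n in (125, 250, 500, 1000):
    print(n, mc_check(n))

If the se() implementation is sound, the two numbers should agree at every sample size, not just the largest ones.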


feriat commented Apr 6, 2015

As a possible improvement I could switch to two-sided (central) differences, along the lines of the sketch below, but I'm pretty sure that alone won't fix it.
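
For what it's worth, a central-difference variant would look roughly like this. It is a sketch, not part of the gist: hessian_central is a hypothetical name, it assumes numpy is imported as np, and the step size is loosened to about 1e-5, since central differences tolerate a larger step than 1e-8.

def hessian_central(self, x0, epsilon=1e-5):
    """
    Central-difference approximation to the Hessian of the log-likelihood,
    built from two-sided differences of the score vector.
    """
    n = x0.shape[0]
    hess = np.zeros((n, n))
    xx = x0.copy()
    for j in range(n):
        xx0 = xx[j]
        xx[j] = xx0 + epsilon
        f_plus = self.score(xx)    # score at the point perturbed upwards
        xx[j] = xx0 - epsilon
        f_minus = self.score(xx)   # score at the point perturbed downwards
        hess[:, j] = (f_plus - f_minus) / (2.0 * epsilon)
        xx[j] = xx0                # restore coordinate j
    return hess

As noted above, this alone may well not fix the problem.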
