@mattjj
Created November 20, 2017 16:58
import numpy as np

def conjugate_gradient(A, b, iters, eps):
    """Approximate A^{-1} b for positive definite A using conjugate gradient.

    A is a function computing the matrix-vector product x -> A x.
    Note that b is reused as the search direction (since p_0 = r_0 = b),
    so the right-hand-side argument is overwritten during iteration.
    """
    v = np.zeros_like(b)            # initial approximate solution
    r = b                           # initial residual vector
    rho = rho_prev = np.dot(r, r)   # initial residual norm squared
    tol = eps * np.sqrt(rho)        # relative residual norm stop criterion
    for k in range(iters):
        if np.sqrt(rho) < tol: break
        w = A(b)                        # matrix-vector product with direction
        alpha = rho / np.dot(b, w)      # optimal step length along direction
        v = v + alpha * b               # update solution estimate
        r = r - alpha * w               # update residual
        rho, rho_prev = np.dot(r, r), rho
        b = r + (rho / rho_prev) * b    # next A-conjugate search direction
    return v
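A quick sanity check of the routine above on a small symmetric positive definite system, comparing against `np.linalg.solve`. The function is repeated here so the snippet runs standalone; the 2x2 matrix `M` is a made-up example, and the matvec is passed as a lambda closure as the `A` argument expects.

```python
import numpy as np

def conjugate_gradient(A, b, iters, eps):
    "Approximate A^{-1} b for positive definite A using conjugate gradient."
    v = np.zeros_like(b)            # initial approximate solution
    r = b                           # initial residual vector
    rho = rho_prev = np.dot(r, r)   # initial residual norm squared
    tol = eps * np.sqrt(rho)        # relative residual norm stop criterion
    for k in range(iters):
        if np.sqrt(rho) < tol: break
        w = A(b)
        alpha = rho / np.dot(b, w)
        v = v + alpha * b
        r = r - alpha * w
        rho, rho_prev = np.dot(r, r), rho
        b = r + (rho / rho_prev) * b
    return v

# Solve M x = rhs for a hypothetical 2x2 SPD matrix, passing A as a closure.
M = np.array([[4., 1.], [1., 3.]])
rhs = np.array([1., 2.])
x = conjugate_gradient(lambda u: M @ u, rhs, iters=10, eps=1e-10)
print(np.allclose(x, np.linalg.solve(M, rhs)))  # True (exact in n=2 steps)
```

For an n-dimensional system, exact arithmetic CG converges in at most n iterations, which is why this 2x2 example matches the direct solve after two steps.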