Finding eigenvalues using simultaneous iteration

This Gist describes simultaneous iteration, an eigenvalue algorithm for a symmetric NxN matrix A. It can be seen as a continuation of my previous Gist, which described a conceptually straightforward (albeit practically sub-optimal) eigenvalue algorithm for the same problem. The algorithm presented here is more practically useful than the one in the previous Gist, although still not as practically useful as the implicit QR algorithm.

We want to find the eigenvalue decomposition of the symmetric matrix A; that is, we want to find matrices U and D which satisfy A == U * D * U^T, where U is orthogonal (i.e. U * U^T equals the identity matrix) and D is diagonal.
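As an illustration of the decomposition we are after (using NumPy's built-in np.linalg.eigh as a reference, not the algorithm developed below):

```python
import numpy as np

np.random.seed(0)
A = np.random.normal(size=[4, 4])
A = A.T @ A  # make A symmetric
# eigh returns eigenvalues d and an orthogonal matrix of eigenvectors U
d, U = np.linalg.eigh(A)
D = np.diag(d)
assert np.allclose(A, U @ D @ U.T)           # A == U * D * U^T
assert np.allclose(U @ U.T, np.identity(4))  # U is orthogonal
```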

Using the power method, we can take an initial guess of an eigenvector u, and repeatedly replace u with the matrix-vector product A * u and normalise. This gradually converges to the eigenvector corresponding to the largest eigenvalue of A, assuming that our initial guess u is not orthogonal to that eigenvector.
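The power method can be sketched in a few lines (a standalone illustration; the matrix, seed, and iteration count are arbitrary choices for this sketch):

```python
import numpy as np

np.random.seed(1)
N = 5
A = np.random.normal(size=[N, N])
A = A.T @ A  # symmetric test matrix
u = np.random.normal(size=N)  # initial guess of an eigenvector
for _ in range(500):
    u = A @ u                            # multiply by A...
    u /= np.sqrt(np.sum(np.square(u)))   # ...and normalise
# u now approximates the dominant eigenvector; the Rayleigh quotient
# u^T A u approximates the largest eigenvalue
assert np.isclose(u @ A @ u, np.linalg.eigvalsh(A).max())
```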

The trick with the simultaneous iteration method is to take an initial guess for the entire matrix of eigenvectors, U. As with the power method, on each iteration we replace U with the matrix-matrix product A * U, except now, instead of normalising, we orthogonalise U before starting the next iteration. We do this by calculating the QR factorisation U = Q * R, where Q is orthogonal (i.e. Q * Q^T is equal to the identity matrix) and R is upper triangular, and replacing U with Q, whose columns form an orthonormal basis for the column space of U.
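A single update of this scheme looks as follows, with NumPy's built-in np.linalg.qr standing in for the Householder-based qr routine implemented further down (an illustrative sketch, not the gist's own code):

```python
import numpy as np

np.random.seed(0)
N = 4
A = np.random.normal(size=[N, N])
A = A.T @ A  # symmetric test matrix
U = np.random.normal(size=[N, N])  # initial guess of eigenvectors

# One simultaneous-iteration step: multiply by A, then orthogonalise
# by replacing A @ U with the Q factor of its QR factorisation
Q, R = np.linalg.qr(A @ U)
U = Q

# The columns of U are orthonormal again, ready for the next iteration
assert np.allclose(U.T @ U, np.identity(N))
```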

In QR factorisation, the first column of Q is simply the first column of U after it has been normalised. Therefore in simultaneous iteration, looking at the first column of U, denoted by u, on each iteration we are just replacing u with A * u and normalising, which is exactly the power method for finding the eigenvector corresponding to the largest eigenvalue of A. Once the first column of U converges to this eigenvector, however, the remaining columns of U are orthogonal to it after each iteration, so they can no longer converge to it under the power method; instead, we expect the second column of U to converge to the eigenvector corresponding to the next largest eigenvalue of A. Once the second column converges to this eigenvector, the remaining columns are orthogonal to these two eigenvectors, and so cannot converge to either of them, so we expect the third column to converge to the eigenvector corresponding to the third largest eigenvalue, and so on, until all of the columns of U have converged to the full set of eigenvectors of A.
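This ordering can be checked numerically: after enough iterations, the Rayleigh quotients of the columns of U come out sorted from largest to smallest eigenvalue (a sketch using np.linalg.qr, with an arbitrary seed and a generous iteration count):

```python
import numpy as np

np.random.seed(3)
N = 6
A = np.random.normal(size=[N, N])
A = A.T @ A  # symmetric test matrix
U = np.random.normal(size=[N, N])
for _ in range(2000):
    U, _ = np.linalg.qr(A @ U)
# Rayleigh quotient of each column: diagonal of U^T A U
e_vals = (U * (A @ U)).sum(axis=0)
# Columns converge in order: largest eigenvalue first, then the next, ...
assert np.all(np.diff(e_vals) <= 1e-6)  # descending order
# ... matching eigh's ascending eigenvalues, reversed
assert np.allclose(e_vals, np.linalg.eigvalsh(A)[::-1])
```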

The simultaneous iteration method is presented in the Python function eig in the script eig.py below, along with the outputs from running eig.py with N = 5 and N = 10.

import numpy as np
from time import perf_counter

np.set_printoptions(precision=3, linewidth=1000, suppress=True, threshold=1000)
np.random.seed(20)

norm = lambda x: np.sqrt(np.sum(np.square(x)))

def print_progress(i, U, U_old, AU):
    """ Print current progress of the simultaneous power iteration, including
    the current iteration number, the relative change in the approximation of
    the eigenvectors of A, and the current approximation of the eigenvalues of
    A """
    print(
        "%02i: %.7f" % (i, norm(U - U_old) / norm(U_old)),
        (U * AU).sum(axis=0)
    )

def qr(A, N):
    """ Perform a QR decomposition of the NxN matrix A. Q is returned, A is
    modified in place to form R """
    v = A[:, 0].copy()
    v[0] -= norm(v)
    beta = 2 / np.sum(np.square(v))
    A -= beta * np.outer(v, v.T @ A)
    Q = np.identity(N) - beta * np.outer(v, v)
    for n in range(1, N - 1):
        v = A[:, n].copy()
        v[:n] = 0
        v[n] -= norm(v)
        beta = 2 / np.sum(np.square(v))
        A -= beta * np.outer(v, v.T @ A)
        Q -= beta * np.outer(Q @ v, v.T)
    return Q

def eig(A, N, tol=1e-5, verbose=False, max_its=100):
    """ Find the eigenvalues and eigenvectors for the symmetric NxN matrix A,
    using simultaneous power iteration """
    # Initial random guess of eigenvectors
    U = np.random.normal(size=[N, N])
    AU = A @ U
    # Iterate until convergence (or too many iterations)
    i = 0
    while True:
        U_old = U.copy()
        # Replace U with Q in the QR decomposition of A @ U
        U = qr(AU, N)
        AU = A @ U
        if verbose:
            print_progress(i, U, U_old, AU)
        # Check for convergence
        if (norm(U - U_old) < (tol * norm(U_old))) or (i >= max_its):
            break
        i += 1
    # Approximate eigenvalues using Rayleigh quotient and return
    e_vals = (U * AU).sum(axis=0)
    return e_vals, U

if __name__ == "__main__":
    # Initialise matrix
    N = 5
    # N = 10
    A = np.random.normal(size=[N, N])
    A = A.T @ A
    print("A =", A, sep="\n")
    # Calculate eigenvalues
    t_start = perf_counter()
    D, U = eig(A, N, verbose=True)
    # D, U = eig(A, N, verbose=False)
    print("Time taken = %.3f s" % (perf_counter() - t_start))
    # Print results
    print("A =", A, sep="\n")
    print("U =", U, sep="\n")
    print("D =", D, sep="\n")
    reconstruction_error = np.max(np.abs(A - (U @ np.diag(D) @ U.T)))
    orthogonality_error = np.max(np.abs(np.identity(N) - (U @ U.T)))
    print("Reconstruction error = %.7f" % reconstruction_error, sep="\n")
    print("Orthogonality error = %.7f" % orthogonality_error, sep="\n")
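As a quick standalone sanity check (separate from eig.py), the Householder-based qr function above can be verified against the defining properties of a QR factorisation:

```python
import numpy as np

norm = lambda x: np.sqrt(np.sum(np.square(x)))

def qr(A, N):
    """ Householder QR as in eig.py: Q is returned, A is modified in place
    to form R """
    v = A[:, 0].copy()
    v[0] -= norm(v)
    beta = 2 / np.sum(np.square(v))
    A -= beta * np.outer(v, v.T @ A)
    Q = np.identity(N) - beta * np.outer(v, v)
    for n in range(1, N - 1):
        v = A[:, n].copy()
        v[:n] = 0
        v[n] -= norm(v)
        beta = 2 / np.sum(np.square(v))
        A -= beta * np.outer(v, v.T @ A)
        Q -= beta * np.outer(Q @ v, v.T)
    return Q

np.random.seed(4)
N = 5
A = np.random.normal(size=[N, N])
A_copy = A.copy()
Q = qr(A, N)  # A now holds R
assert np.allclose(Q @ A, A_copy)            # Q @ R reconstructs A
assert np.allclose(Q @ Q.T, np.identity(N))  # Q is orthogonal
assert np.allclose(A, np.triu(A))            # R is upper triangular
```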
A =
[[ 1.288 1.066 -0.292 -2.029 -1.004]
[ 1.066 11.341 -4.101 -3.868 1.559]
[-0.292 -4.101 5.425 2.478 0.592]
[-2.029 -3.868 2.478 9.335 3.927]
[-1.004 1.559 0.592 3.927 3.204]]
00: 0.8987118 [ 5.284 15.464 8.589 0.845 0.409]
01: 0.5736591 [13.032 9.79 6.742 0.792 0.235]
02: 0.3998935 [16.247 8.989 4.33 0.807 0.219]
03: 0.1739611 [16.689 9.284 3.592 0.809 0.217]
04: 0.0775967 [16.788 9.305 3.472 0.809 0.217]
05: 0.0374505 [16.817 9.293 3.455 0.809 0.217]
06: 0.0192241 [16.826 9.287 3.453 0.809 0.217]
07: 0.0102296 [16.828 9.284 3.453 0.809 0.217]
08: 0.0055465 [16.829 9.284 3.453 0.809 0.217]
09: 0.0030350 [16.829 9.283 3.453 0.809 0.217]
10: 0.0016679 [16.829 9.283 3.453 0.809 0.217]
11: 0.0009185 [16.829 9.283 3.453 0.809 0.217]
12: 0.0005063 [16.829 9.283 3.453 0.809 0.217]
13: 0.0002792 [16.829 9.283 3.453 0.809 0.217]
14: 0.0001540 [16.829 9.283 3.453 0.809 0.217]
15: 0.0000849 [16.829 9.283 3.453 0.809 0.217]
16: 0.0000468 [16.829 9.283 3.453 0.809 0.217]
17: 0.0000258 [16.829 9.283 3.453 0.809 0.217]
18: 0.0000143 [16.829 9.283 3.453 0.809 0.217]
19: 0.0000079 [16.829 9.283 3.453 0.809 0.217]
A =
[[ 1.288 1.066 -0.292 -2.029 -1.004]
[ 1.066 11.341 -4.101 -3.868 1.559]
[-0.292 -4.101 5.425 2.478 0.592]
[-2.029 -3.868 2.478 9.335 3.927]
[-1.004 1.559 0.592 3.927 3.204]]
U =
[[ 0.138 0.141 -0.165 -0.904 -0.34 ]
[ 0.692 -0.559 -0.315 -0.047 0.326]
[-0.385 0.115 -0.9 0.064 0.156]
[-0.583 -0.599 0.21 -0.357 0.36 ]
[-0.116 -0.543 -0.141 0.22 -0.79 ]]
D =
[16.829 9.283 3.453 0.809 0.217]
Reconstruction error = 0.0000574
Orthogonality error = 0.0000000
A =
[[ 3.954 -0.358 -3.121 -6.618 -3.274 1.398 -2.821 1.434 -2.253 3.112]
[-0.358 20.248 -1.956 -4.728 0.624 0.404 0.952 -6.958 2.52 8.028]
[-3.121 -1.956 6.66 4.984 2.415 -0.112 3.373 -3.785 0.944 -6.602]
[-6.618 -4.728 4.984 17.463 5.068 -0.346 1.095 1.848 1.062 -7.427]
[-3.274 0.624 2.415 5.068 10.343 3.806 3.509 0.946 -0.511 -2.276]
[ 1.398 0.404 -0.112 -0.346 3.806 12.151 -1.023 5.247 -2.123 -1.104]
[-2.821 0.952 3.373 1.095 3.509 -1.023 7.138 -2.119 4.115 -2.595]
[ 1.434 -6.958 -3.785 1.848 0.946 5.247 -2.119 14.314 -0.808 -2.685]
[-2.253 2.52 0.944 1.062 -0.511 -2.123 4.115 -0.808 7.811 3.147]
[ 3.112 8.028 -6.602 -7.427 -2.276 -1.104 -2.595 -2.685 3.147 15.251]]
00: 0.9213752 [27.248 26.964 10.196 17.78 12.457 3.683 12.388 3.267 1.344 0.007]
01: 0.3945252 [34.037 25.033 12.641 15.997 11.118 5.501 9.403 1.184 0.414 0.007]
02: 0.2650894 [36.207 24.857 15.903 12.216 11.339 5.926 7.417 1.051 0.411 0.007]
03: 0.1734132 [36.876 25.108 17.241 10.47 11.649 6.202 6.325 1.046 0.411 0.007]
04: 0.1094844 [37.1 25.317 17.527 9.985 11.788 6.356 5.8 1.045 0.411 0.007]
05: 0.0710334 [37.184 25.436 17.545 9.898 11.811 6.454 5.544 1.045 0.411 0.007]
06: 0.0487534 [37.218 25.496 17.521 9.929 11.774 6.528 5.405 1.045 0.411 0.007]
07: 0.0364780 [37.233 25.526 17.501 10.01 11.697 6.582 5.323 1.045 0.411 0.007]
08: 0.0308014 [37.24 25.54 17.489 10.126 11.584 6.621 5.272 1.045 0.411 0.007]
09: 0.0292965 [37.243 25.546 17.483 10.278 11.433 6.646 5.241 1.045 0.411 0.007]
10: 0.0298533 [37.245 25.549 17.48 10.465 11.247 6.663 5.222 1.045 0.411 0.007]
11: 0.0308735 [37.246 25.551 17.478 10.681 11.032 6.674 5.211 1.045 0.411 0.007]
12: 0.0313575 [37.246 25.551 17.477 10.911 10.802 6.68 5.204 1.045 0.411 0.007]
13: 0.0308155 [37.246 25.552 17.477 11.136 10.577 6.684 5.199 1.045 0.411 0.007]
14: 0.0291735 [37.246 25.552 17.477 11.34 10.374 6.687 5.197 1.045 0.411 0.007]
15: 0.0266539 [37.246 25.552 17.477 11.51 10.203 6.688 5.195 1.045 0.411 0.007]
16: 0.0236181 [37.246 25.552 17.477 11.644 10.069 6.689 5.194 1.045 0.411 0.007]
17: 0.0204227 [37.246 25.552 17.477 11.745 9.969 6.69 5.194 1.045 0.411 0.007]
18: 0.0173360 [37.246 25.552 17.476 11.817 9.896 6.69 5.193 1.045 0.411 0.007]
19: 0.0145195 [37.246 25.552 17.476 11.868 9.845 6.69 5.193 1.045 0.411 0.007]
20: 0.0120459 [37.246 25.552 17.476 11.903 9.81 6.69 5.193 1.045 0.411 0.007]
21: 0.0099285 [37.246 25.552 17.476 11.927 9.787 6.69 5.193 1.045 0.411 0.007]
22: 0.0081469 [37.246 25.552 17.476 11.943 9.77 6.691 5.193 1.045 0.411 0.007]
23: 0.0066649 [37.246 25.552 17.476 11.954 9.76 6.691 5.193 1.045 0.411 0.007]
24: 0.0054416 [37.246 25.552 17.476 11.961 9.753 6.691 5.193 1.045 0.411 0.007]
25: 0.0044368 [37.246 25.552 17.476 11.966 9.748 6.691 5.193 1.045 0.411 0.007]
26: 0.0036144 [37.246 25.552 17.476 11.969 9.745 6.691 5.193 1.045 0.411 0.007]
27: 0.0029426 [37.246 25.552 17.476 11.971 9.743 6.691 5.193 1.045 0.411 0.007]
28: 0.0023948 [37.246 25.552 17.476 11.972 9.741 6.691 5.193 1.045 0.411 0.007]
29: 0.0019485 [37.246 25.552 17.476 11.973 9.74 6.691 5.193 1.045 0.411 0.007]
30: 0.0015850 [37.246 25.552 17.476 11.974 9.74 6.691 5.193 1.045 0.411 0.007]
31: 0.0012893 [37.246 25.552 17.476 11.974 9.739 6.691 5.193 1.045 0.411 0.007]
32: 0.0010486 [37.246 25.552 17.476 11.974 9.739 6.691 5.193 1.045 0.411 0.007]
33: 0.0008528 [37.246 25.552 17.476 11.975 9.739 6.691 5.193 1.045 0.411 0.007]
34: 0.0006936 [37.246 25.552 17.476 11.975 9.739 6.691 5.193 1.045 0.411 0.007]
35: 0.0005641 [37.246 25.552 17.476 11.975 9.739 6.691 5.193 1.045 0.411 0.007]
36: 0.0004587 [37.246 25.552 17.476 11.975 9.739 6.691 5.193 1.045 0.411 0.007]
37: 0.0003730 [37.246 25.552 17.476 11.975 9.739 6.691 5.193 1.045 0.411 0.007]
38: 0.0003034 [37.246 25.552 17.476 11.975 9.739 6.691 5.193 1.045 0.411 0.007]
39: 0.0002467 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
40: 0.0002006 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
41: 0.0001632 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
42: 0.0001327 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
43: 0.0001079 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
44: 0.0000878 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
45: 0.0000714 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
46: 0.0000580 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
47: 0.0000472 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
48: 0.0000384 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
49: 0.0000312 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
50: 0.0000254 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
51: 0.0000206 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
52: 0.0000168 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
53: 0.0000137 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
54: 0.0000111 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
55: 0.0000090 [37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
A =
[[ 3.954 -0.358 -3.121 -6.618 -3.274 1.398 -2.821 1.434 -2.253 3.112]
[-0.358 20.248 -1.956 -4.728 0.624 0.404 0.952 -6.958 2.52 8.028]
[-3.121 -1.956 6.66 4.984 2.415 -0.112 3.373 -3.785 0.944 -6.602]
[-6.618 -4.728 4.984 17.463 5.068 -0.346 1.095 1.848 1.062 -7.427]
[-3.274 0.624 2.415 5.068 10.343 3.806 3.509 0.946 -0.511 -2.276]
[ 1.398 0.404 -0.112 -0.346 3.806 12.151 -1.023 5.247 -2.123 -1.104]
[-2.821 0.952 3.373 1.095 3.509 -1.023 7.138 -2.119 4.115 -2.595]
[ 1.434 -6.958 -3.785 1.848 0.946 5.247 -2.119 14.314 -0.808 -2.685]
[-2.253 2.52 0.944 1.062 -0.511 -2.123 4.115 -0.808 7.811 3.147]
[ 3.112 8.028 -6.602 -7.427 -2.276 -1.104 -2.595 -2.685 3.147 15.251]]
U =
[[-0.18 -0.297 0.043 -0.153 -0.025 0.023 0.051 -0.014 0.605 -0.696]
[-0.504 0.445 -0.434 -0.079 0.241 -0.473 -0.236 0.061 0.107 -0.019]
[ 0.245 0.298 0.076 -0.281 -0.122 -0.049 0.311 0.579 0.474 0.302]
[ 0.514 0.302 -0.102 0.426 0.523 -0.034 0.176 -0.255 0.24 -0.155]
[ 0.205 0.21 -0.491 -0.04 -0.115 0.6 -0.39 0.281 -0.081 -0.245]
[ 0.077 -0.233 -0.663 -0.274 -0.08 -0.043 0.597 -0.204 -0.136 0.03 ]
[ 0.09 0.315 -0.04 -0.004 -0.62 -0.008 -0.153 -0.594 0.33 0.15 ]
[ 0.224 -0.523 -0.319 0.406 -0.159 -0.351 -0.329 0.218 0.234 0.225]
[-0.062 0.246 0.022 0.497 -0.47 -0.218 0.331 0.273 -0.269 -0.403]
[-0.528 -0.041 -0.097 0.47 0.059 0.489 0.253 0.006 0.281 0.321]]
D =
[37.246 25.552 17.476 11.975 9.738 6.691 5.193 1.045 0.411 0.007]
Reconstruction error = 0.0001375
Orthogonality error = 0.0000000