Matrix inversion without NumPy/SciPy
# Program for determining the inverse matrix and solving a system of
# linear equations without using additional modules such as NumPy / SciPy.
import copy

def inv(A):
    '''
    Return the inverse of the matrix passed in.
    - parameter A: the matrix to be inverted (list of lists).
    - return: the inverse of the matrix A.
    '''
    # Initializing the main values.
    n = len(A)  # Size of matrix A (number of rows).
    # Checking that matrix A is square.
    if n != len(A[0]):
        raise ArithmeticError('Matrix must be square to inverse.')
    # Initializing the identity matrix of the same order.
    I = [[0.0] * n for _ in range(n)]
    for item in range(n):
        I[item][item] = 1.0
    # Copy matrices: work on copies so the caller's A and I are not modified.
    apeA = copy.deepcopy(A)
    apeI = copy.deepcopy(I)
    # Make sure that the multiplicative inverse can be found for the matrix A,
    # that is, A • A^(-1) = I: triangularize a copy and check det(A) != 0.
    diffA = copy.deepcopy(A)  # Additional working copy of matrix A.
    for focusDiagonal in range(n):
        # If a diagonal component is zero, change it to approximately zero.
        if diffA[focusDiagonal][focusDiagonal] == 0:
            diffA[focusDiagonal][focusDiagonal] = 1.0e-16
        for i in range(focusDiagonal + 1, n):
            numberCurrentRow = diffA[i][focusDiagonal] / diffA[focusDiagonal][focusDiagonal]
            for j in range(n):
                diffA[i][j] = diffA[i][j] - numberCurrentRow * diffA[focusDiagonal][j]
    # The determinant is the product of the diagonal of the triangularized copy.
    detValue = 1.0
    for i in range(n):
        detValue = detValue * diffA[i][i]
    if detValue == 0:
        raise ArithmeticError("Singular Matrix. Matrix A – degenerate!")
    # Perform row operations (Gauss-Jordan elimination).
    index = list(range(n))
    for focusDiagonal in range(n):
        scaler_focusDiagonal = 1.0 / apeA[focusDiagonal][focusDiagonal]
        for j in range(n):
            apeA[focusDiagonal][j] = apeA[focusDiagonal][j] * scaler_focusDiagonal
            apeI[focusDiagonal][j] = apeI[focusDiagonal][j] * scaler_focusDiagonal
        for i in index[0:focusDiagonal] + index[focusDiagonal + 1:]:
            numberCurrentRow = apeA[i][focusDiagonal]
            for j in range(n):
                apeA[i][j] = apeA[i][j] - numberCurrentRow * apeA[focusDiagonal][j]
                apeI[i][j] = apeI[i][j] - numberCurrentRow * apeI[focusDiagonal][j]
    # Pretty-print the resulting inverse matrix.
    for item_1 in apeI:
        print([round(item_2, 3) for item_2 in item_1])
    return apeI

if __name__ == '__main__':
    # Example matrix from the write-up below.
    inv([[5, 3, 1], [3, 9, 4], [1, 3, 5]])

Matrix algebra and matrix calculus | Inverse matrices

Intro

The need for an inverse matrix arises when solving a typical linear algebraic equation:

$$ Ax = b, $$

$$ \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x_{11} \\ x_{21} \\ x_{31} \end{bmatrix} = \begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \end{bmatrix}. $$

We want to solve for $x$, so obtain the inverse of $A$ and do the following:

$$ x = A^{-1} b, $$

$$ \begin{bmatrix} x_{11} \\ x_{21} \\ x_{31} \end{bmatrix} = \begin{bmatrix} ai_{11} & ai_{12} & ai_{13} \\ ai_{21} & ai_{22} & ai_{23} \\ ai_{31} & ai_{32} & ai_{33} \end{bmatrix} \begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \end{bmatrix}.$$

So we have a motive to find $A^{-1}$.

Definition 1

A square matrix $A$ is called nonsingular (or non-degenerate) if it has a (necessarily unique) multiplicative inverse, or simply inverse matrix, $A^{-1}$, defined by the conditions $AA^{-1}=A^{-1}A=I$. Otherwise $A$ is singular (degenerate).

A square matrix $A\equiv [a_{ij}]$ of order $n$ is nonsingular if and only if $\det(A)\equiv \det[a_{ij}] \ne 0$; in this case the inverse $A^{-1}$ is the square matrix of the same order $n$:

$$A^{-1}\equiv [a_{ij}]^{-1} \equiv \left[ \frac{A_{ji}}{\det[a_{ij}]}\right ],$$

where $A_{ji}$ is the cofactor (algebraic complement) of the element $a_{ij}$ in the determinant $\det[a_{ij}]$. A square matrix is non-degenerate if and only if its rows (columns) are linearly independent.
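
For a 2×2 matrix this definition can be applied directly. A minimal sketch (the helper name inv2 is invented for this illustration and is not part of the gist above):

# Direct application of Definition 1 for a 2x2 matrix: each entry (i, j) of the
# inverse is the cofactor A_ji divided by det(A). inv2 is a hypothetical helper
# written only for this illustration.
def inv2(A):
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c  # det[a_ij]
    if det == 0:
        raise ArithmeticError('Matrix is singular (degenerate).')
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

print(inv2([[4.0, 7.0], [2.0, 6.0]]))  # [[0.6, -0.7], [-0.2, 0.4]]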

In general

Given a system of equations, write the coefficient matrix $A$, the variable matrix $x$, and the constant matrix $b$. Then:

$$ Ax = b. $$

Multiply both sides by the inverse of $A$ to obtain the solution:

$$ (A^{-1}) A x = (A^{-1})b, $$

$$ \left[(A^{-1}) A \right] x = (A^{-1})b, $$

$$ Ix = (A^{-1})b, $$

$$ x = (A^{-1})b .$$
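
With the inv function from the gist above, this boils down to one matrix–vector product. A minimal sketch (the solve helper and the sample right-hand side b are made up for illustration):

# Sketch: solve A x = b as x = A^(-1) b, reusing inv() from the gist above.
# The solve() helper name and the sample b are assumptions for illustration.
def solve(A, b):
    Ainv = inv(A)
    # Matrix-vector product x = A^(-1) b.
    return [sum(Ainv[i][j] * b[j] for j in range(len(b))) for i in range(len(Ainv))]

x = solve([[5, 3, 1], [3, 9, 4], [1, 3, 5]], [1, 2, 3])
print([round(value, 3) for value in x])  # approximately [0.136, -0.106, 0.636]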

There is a set of steps $S$ that must be performed to do this for a matrix of any size. Think of the inverse matrix method as a set of steps for each column, from left to right, and for each element in the current column; each column contains one of the diagonal elements, represented as $S_{k1}$, where $k=1 \to n$, and each such element is the one used for scaling. When we are at a certain step $S_{ij}$, where $i$ and $j = 1 \to n$, regardless of where we are in the matrix, we perform this step over the entire row and use the row containing the diagonal element $S_{k1}$ as part of that operation:

$$ S = \begin{bmatrix} S_{11} & \dots & \dots & S_{k2} & \dots & \dots & S_{n2} \\ S_{12} & \dots & \dots & S_{k3} & \dots & \dots & S_{n3} \\ \vdots & & & \vdots & & & \vdots \\ S_{1k} & \dots & \dots & S_{k1} & \dots & \dots & S_{nk} \\ \vdots & & & \vdots & & & \vdots \\ S_{1n-1} & \dots & \dots & S_{kn-1} & \dots & \dots & S_{nn-1} \\ S_{1n} & \dots & \dots & S_{kn} & \dots & \dots & S_{n1} \end{bmatrix}.$$

Algorithm for coding

Call the current diagonal element the focus diagonal element (focusDiagonal). The first step $S_{k1}$ for each column is to multiply the row that contains focusDiagonal by 1/(focusDiagonal). We then operate on the remaining rows ($S_{k2}$ to $S_{kn}$), the ones without focusDiagonal in them, as follows:

  • use the element that is in the same column as focusDiagonal as a multiplier;
  • replace the row with the result of [current row] – multiplier • [row that contains focusDiagonal], and apply the same operation to the $I$ matrix;
  • this leaves a zero in the column shared by focusDiagonal in $A$.

This is done for all columns from left to right in both the $A$ and $I$ matrices. When this is completed, $A$ becomes the identity matrix and $I$ becomes the inverse of $A$. A condensed sketch of one column pass is given below.
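
Condensed to code, one column pass looks roughly as follows; this is only a sketch of the loop body already used in inv() above, with A_M and I_M standing for the working copies and fd for the index of the focus diagonal:

# Sketch of the per-column operations described above (not a separate routine
# of the gist): scale the focus-diagonal row, then zero out the rest of the column.
def eliminate_column(A_M, I_M, fd):
    n = len(A_M)
    scaler = 1.0 / A_M[fd][fd]
    # Step S_k1: scale the row containing focusDiagonal by 1/(focusDiagonal).
    for j in range(n):
        A_M[fd][j] *= scaler
        I_M[fd][j] *= scaler
    # Steps S_k2 .. S_kn: eliminate the focusDiagonal column in all other rows.
    for i in range(n):
        if i == fd:
            continue
        multiplier = A_M[i][fd]
        for j in range(n):
            A_M[i][j] -= multiplier * A_M[fd][j]
            I_M[i][j] -= multiplier * I_M[fd][j]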

Example

Our starting matrices are:

$$A= \begin{bmatrix} 5 & 3 & 1 \\ 3 & 9 & 4 \\ 1 & 3 & 5 \end{bmatrix}, ~~ I= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

$A_M$ and $I_M$ are initially the same as $A$ and $I$, respectively:

$$A_M= \begin{bmatrix} 5 & 3 & 1 \\ 3 & 9 & 4 \\ 1 & 3 & 5 \end{bmatrix}, ~~ I_M= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
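
In code, these working copies can be made with copy.deepcopy so that the original $A$ and $I$ are left untouched by the row operations (as in the gist above):

import copy

A = [[5, 3, 1], [3, 9, 4], [1, 3, 5]]
I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

# Row operations below modify A_M and I_M only, never A and I themselves.
A_M = copy.deepcopy(A)
I_M = copy.deepcopy(I)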

  1. Using the steps and methods that we just described, scale row 1 of both matrices by 1/5

    $$A_M= \begin{bmatrix} 1 & 0.6 & 0.2 \\ 3 & 9 & 4 \\ 1 & 3 & 5 \end{bmatrix}, ~~ I_M= \begin{bmatrix} 0.2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

  2. Subtract 3 • row 1 of $A_M$ from row 2 of $A_M$, and subtract 3 • row 1 of $I_M$ from row 2 of $I_M$

    $$A_M= \begin{bmatrix} 1 & 0.6 & 0.2 \\ 0 & 7.2 & 3.4 \\ 1 & 3 & 5 \end{bmatrix}, ~~ I_M= \begin{bmatrix} 0.2 & 0 & 0 \\ -0.6 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

  3. Subtract 1 • row 1 of $A_M$ from row 3 of $A_M$, and subtract 1 • row 1 of $I_M$ from row 3 of $I_M$

    $$A_M= \begin{bmatrix} 1 & 0.6 & 0.2 \\ 0 & 7.2 & 3.4 \\ 0 & 2.4 & 4.8 \end{bmatrix}, ~~ I_M= \begin{bmatrix} 0.2 & 0 & 0 \\ -0.6 & 1 & 0 \\ -0.2 & 0 & 1 \end{bmatrix}.$$

  4. Scale row 2 of both matrices by 1/7.2

    $$A_M= \begin{bmatrix} 1 & 0.6 & 0.2 \\ 0 & 1 & 0.472 \\ 0 & 2.4 & 4.8 \end{bmatrix}, ~~ I_M= \begin{bmatrix} 0.2 & 0 & 0 \\ -0.083 & 0.139 & 0 \\ -0.2 & 0 & 1 \end{bmatrix}.$$

  5. Subtract 0.6 • row 2 of $A_M$ from row 1 of $A_M$, and subtract 0.6 • row 2 of $I_M$ from row 1 of $I_M$

    $$A_M= \begin{bmatrix} 1 & 0 & -0.083 \\ 0 & 1 & 0.472 \\ 0 & 2.4 & 4.8 \end{bmatrix}, ~~ I_M= \begin{bmatrix} 0.25 & -0.083 & 0 \\ -0.083 & 0.139 & 0 \\ -0.2 & 0 & 1 \end{bmatrix}.$$

  6. Subtract 2.4 • row 2 of $A_M$ from row 3 of $A_M$, and subtract 2.4 • row 2 of $I_M$ from row 3 of $I_M$

    $$A_M= \begin{bmatrix} 1 & 0 & -0.083 \\ 0 & 1 & 0.472 \\ 0 & 0 & 3.667 \end{bmatrix}, ~~ I_M= \begin{bmatrix} 0.25 & -0.083 & 0 \\ -0.083 & 0.139 & 0 \\ 0 & -0.333 & 1 \end{bmatrix}.$$

  7. Scale row 3 of both matrices by 1/3.667

    $$A_M= \begin{bmatrix} 1 & 0 & -0.083 \\ 0 & 1 & 0.472 \\ 0 & 0 & 1 \end{bmatrix}, ~~ I_M= \begin{bmatrix} 0.25 & -0.083 & 0 \\ -0.083 & 0.139 & 0 \\ 0 & -0.091 & 0.273 \end{bmatrix}.$$

  8. Subtract -0.083 • row 3 of $A_M$ from row 1 of $A_M$, and subtract -0.083 • row 3 of $I_M$ from row 1 of $I_M$

    $$A_M= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0.472 \\ 0 & 0 & 1 \end{bmatrix}, ~~ I_M= \begin{bmatrix} 0.25 & -0.091 & 0.023 \\ -0.083 & 0.139 & 0 \\ 0 & -0.091 & 0.273 \end{bmatrix}.$$

  9. Subtract 0.472 • row 3 of $A_M$ from row 2 of $A_M$, and subtract 0.472 • row 3 of $I_M$ from row 2 of $I_M$

    $$A_M= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, ~~ I_M= \begin{bmatrix} 0.25 & -0.091 & 0.023 \\ -0.083 & 0.182 & -0.129 \\ 0 & -0.091 & 0.273 \end{bmatrix}.$$

It all looks good, but let’s perform a check of $A \cdot I_M = I.$
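
That check is easy to script with the inv function from the gist above (matmul is a throwaway helper written only for this check):

# Verify A * A^(-1) = I using inv() from the gist above.
def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

A = [[5, 3, 1], [3, 9, 4], [1, 3, 5]]
product = matmul(A, inv(A))
print([[round(value, 3) for value in row] for row in product])
# Up to rounding: [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]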

Footnotes

  1. Korn G.A., Korn T.M. Mathematical Handbook for Scientists and Engineers.
