@tobydriscoll
Created August 30, 2016 21:22
TB-Lecture-02
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Lecture 2: Orthogonal vectors and matrices"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"With real vectors and matrices, the transpose operation is simple and familiar. It also happens to correspond to what we call the **adjoint** mathematically. In the complex case, one also has to conjugate the entries to keep the mathematical structure intact. We call this operator the **hermitian** of a matrix and use a star superscript for it."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"A =\n",
"\n",
" 0.8147 + 0.9575i 0.1270 + 0.1576i 0.6324 + 0.9572i 0.2785 + 0.8003i\n",
" 0.9058 + 0.9649i 0.9134 + 0.9706i 0.0975 + 0.4854i 0.5469 + 0.1419i\n"
]
}
],
"source": [
"A = rand(2,4) + 1i*rand(2,4)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Aadjoint =\n",
"\n",
" 0.8147 - 0.9575i 0.9058 - 0.9649i\n",
" 0.1270 - 0.1576i 0.9134 - 0.9706i\n",
" 0.6324 - 0.9572i 0.0975 - 0.4854i\n",
" 0.2785 - 0.8003i 0.5469 - 0.1419i\n"
]
}
],
"source": [
"Aadjoint = A'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get plain transpose, use a `.^` operator."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Atrans =\n",
"\n",
" 0.8147 + 0.9575i 0.9058 + 0.9649i\n",
" 0.1270 + 0.1576i 0.9134 + 0.9706i\n",
" 0.6324 + 0.9572i 0.0975 + 0.4854i\n",
" 0.2785 + 0.8003i 0.5469 + 0.1419i\n"
]
}
],
"source": [
"Atrans = A.'"
]
},
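{
"cell_type": "markdown",
"metadata": {},
"source": [
"The two operations are related by conjugation: `A'` should agree entrywise with `conj(A.')`. As a quick check, the largest entrywise difference below should be exactly zero."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"% The hermitian is the conjugate of the plain transpose.\n",
"max(max( abs( Aadjoint - conj(Atrans) ) ))"
]
},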
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Inner products"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If **u** and **v** are column vectors of the same length, then their **inner product** is $\\mathbf{u}^*\\mathbf{v}$. The result is a scalar."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"u =\n",
"\n",
" 4.0000 + 0.0000i\n",
" -1.0000 + 0.0000i\n",
" 2.0000 + 2.0000i\n",
"\n",
"\n",
"v =\n",
"\n",
" -1.0000 + 0.0000i\n",
" 0.0000 + 1.0000i\n",
" 1.0000 + 0.0000i\n",
"\n",
"\n",
"innerprod =\n",
"\n",
" -2.0000 - 3.0000i\n"
]
}
],
"source": [
"u = [ 4; -1; 2+2i ], v = [ -1; 1i; 1 ], \n",
"innerprod = u'*v"
]
},
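{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the order of the two vectors matters only up to conjugation: $\\mathbf{u}^*\\mathbf{v} = \\overline{\\mathbf{v}^*\\mathbf{u}}$. A quick check:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"% Swapping the arguments conjugates the inner product.\n",
"[ u'*v, conj(v'*u) ]"
]
},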
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The inner product has geometric significance. It is used to define length through the 2-norm,"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"length_u_squared =\n",
"\n",
" 25\n"
]
}
],
"source": [
"length_u_squared = u'*u"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ans =\n",
"\n",
" 25\n"
]
}
],
"source": [
"sum( abs(u).^2 )"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"norm_u =\n",
"\n",
" 5\n"
]
}
],
"source": [
"norm_u = norm(u)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It also defines the angle between two vectors as a generalization of the familiar dot product."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"cos_theta =\n",
"\n",
" -0.2309 - 0.3464i\n"
]
}
],
"source": [
"cos_theta = (u'*v) / ( norm(u)*norm(v) )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The angle may be complex when the vectors are complex! "
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"theta =\n",
"\n",
" 1.7902 + 0.3479i\n"
]
}
],
"source": [
"theta = acos(cos_theta)"
]
},
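{
"cell_type": "markdown",
"metadata": {},
"source": [
"For real vectors, however, the cosine and the angle come out real, matching the familiar geometric picture. A small check with two real random vectors (`x` and `y` are new variables used only here):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"% With real vectors the angle is an ordinary real number.\n",
"x = rand(3,1);  y = rand(3,1);\n",
"theta_real = acos( (x'*y) / ( norm(x)*norm(y) ) )"
]
},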
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The operations of inverse and hermitian commute."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ans =\n",
"\n",
" 2.4392 + 2.1789i 4.2203 - 2.1410i -5.6430 - 3.9836i -1.0386 + 4.9812i\n",
" -0.1291 - 0.5146i -1.8260 - 1.0582i -1.0840 + 3.6489i 3.6179 - 1.7832i\n",
" -3.0514 - 0.3801i -0.9192 + 3.9355i 6.9683 - 0.4530i -2.9434 - 3.8565i\n",
" -0.4551 - 0.0746i -0.7963 + 1.3118i 1.9624 - 0.7105i -0.6719 - 0.2405i\n"
]
}
],
"source": [
"A = rand(4,4)+1i*rand(4,4); (inv(A))'"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ans =\n",
"\n",
" 2.4392 + 2.1789i 4.2203 - 2.1410i -5.6430 - 3.9836i -1.0386 + 4.9812i\n",
" -0.1291 - 0.5146i -1.8260 - 1.0582i -1.0840 + 3.6489i 3.6179 - 1.7832i\n",
" -3.0514 - 0.3801i -0.9192 + 3.9355i 6.9683 - 0.4530i -2.9434 - 3.8565i\n",
" -0.4551 - 0.0746i -0.7963 + 1.3118i 1.9624 - 0.7105i -0.6719 - 0.2405i\n"
]
}
],
"source": [
"inv(A')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So we just write $\\mathbf{A}^{-*}$ for either case. "
]
},
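{
"cell_type": "markdown",
"metadata": {},
"source": [
"The agreement above can also be confirmed with a single number instead of inspecting the printed entries; the difference should be at the level of roundoff."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"% inv(A') and inv(A)' should agree up to roundoff.\n",
"norm( inv(A') - inv(A)' )"
]
},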
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Orthogonality"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Orthogonality, which is the multidimensional extension of perpendicularity, means that $\\cos \\theta=0$, i.e., that the inner product between vectors is zero. A collection of vectors is orthogonal if they are all pairwise orthogonal. \n",
"\n",
"Don't worry about how we are creating the vectors here for now."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Q =\n",
"\n",
" -0.5813 0.1775 0.0673\n",
" -0.3501 -0.4777 0.2848\n",
" -0.0651 -0.3952 -0.9041\n",
" -0.1779 -0.7018 0.2561\n",
" -0.7097 0.3025 -0.1769\n"
]
}
],
"source": [
"[Q,~] = qr(rand(5,3),0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since $\\mathbf{Q}^*\\mathbf{Q}$ is a matrix of all inner products between columns of $\\mathbf{Q}$, those columns are orthogonal if and only if that matrix is diagonal."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"QhQ =\n",
"\n",
" 1.0000 0.0000 0.0000\n",
" 0.0000 1.0000 0.0000\n",
" 0.0000 0.0000 1.0000\n"
]
}
],
"source": [
"QhQ = Q'*Q"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In fact we have a stronger condition here: the columns are **orthonormal**, meaning that they are orthogonal and each has 2-norm equal to 1. "
]
},
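{
"cell_type": "markdown",
"metadata": {},
"source": [
"Equivalently, every column of $\\mathbf{Q}$ has 2-norm equal to 1. One way to check this directly is to sum the squared magnitudes down each column:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"% 2-norm of each column of Q; every value should equal 1.\n",
"sqrt( sum( abs(Q).^2, 1 ) )"
]
},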
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Given any other vector of length 5, we can compute its inner product with each of the columns of $\\mathbf{Q}$. "
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"c =\n",
"\n",
" -0.8950\n",
" -0.4719\n",
" -0.6467\n"
]
}
],
"source": [
"u = rand(5,1); c = Q'*u"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can then use these coefficients to find a vector in the column space of $\\mathbf{Q}$."
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"v =\n",
"\n",
" 0.3930\n",
" 0.3546\n",
" 0.8295\n",
" 0.3248\n",
" 0.6068\n"
]
}
],
"source": [
"v = Q*c"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As explained in the text, $\\mathbf{r} = \\mathbf{u}-\\mathbf{v}$ is orthogonal to all of the columns of $\\mathbf{Q}$."
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ans =\n",
"\n",
" 1.0e-15 *\n",
"\n",
" -0.3608\n",
" 0.1110\n",
" 0.2776\n"
]
}
],
"source": [
"r = u-v; Q'*r"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Consequently, we have decomposed $\\mathbf{u}=\\mathbf{v}+\\mathbf{r}$ into the sum of two orthogonal parts, one lying in the range of $\\mathbf{Q}$. "
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ans =\n",
"\n",
" 1.3184e-16\n"
]
}
],
"source": [
"v'*r"
]
},
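{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because $\\mathbf{v}$ and $\\mathbf{r}$ are orthogonal, their lengths satisfy the Pythagorean identity $\\|\\mathbf{u}\\|_2^2 = \\|\\mathbf{v}\\|_2^2 + \\|\\mathbf{r}\\|_2^2$, which we can confirm numerically:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"% Pythagorean identity for the orthogonal decomposition u = v + r.\n",
"[ norm(u)^2, norm(v)^2 + norm(r)^2 ]"
]
},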
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Unitary matrices"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We just saw that a matrix whose columns are orthonormal is pretty special. It becomes even more special if the matrix is also square, in which case we call it **unitary**. (In the real case, such matrices are confusingly called _orthogonal_. Ugh.) Say $\\mathbf{Q}$ is unitary and $m\\times m$. Then $\\mathbf{Q}^*\\mathbf{Q}$ is an $m\\times m$ identity matrix---that is, $\\mathbf{Q}^*=\\mathbf{Q}^{-1}$! It can't get much easier in terms of finding the inverse of a matrix. "
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ans =\n",
"\n",
" 1.0e-15 *\n",
"\n",
" 0.0555 0.1618 0.1570 0.0555 0.1144\n",
" 0.1241 0.1127 0.0416 0.1001 0.0191\n",
" 0.0964 0.2783 0.1144 0.0416 0.0878\n",
" 0.2483 0.0555 0.0785 0.2355 0.1388\n",
" 0.0747 0.1144 0.1430 0 0.2289\n"
]
}
],
"source": [
"[Q,~] = qr(rand(5,5)+1i*rand(5,5));\n",
"abs( inv(Q) - Q' )"
]
},
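{
"cell_type": "markdown",
"metadata": {},
"source": [
"One consequence of $\\mathbf{Q}^*\\mathbf{Q}=\\mathbf{I}$ is that multiplication by a unitary matrix preserves the 2-norm: $\\|\\mathbf{Q}\\mathbf{x}\\|_2^2 = \\mathbf{x}^*\\mathbf{Q}^*\\mathbf{Q}\\mathbf{x} = \\|\\mathbf{x}\\|_2^2$. A quick check with a random complex vector (the variable `x` is introduced only for this check):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"% Multiplication by the unitary Q leaves the 2-norm unchanged.\n",
"x = rand(5,1) + 1i*rand(5,1);\n",
"[ norm(Q*x), norm(x) ]"
]
},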
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The rank of $\\mathbf{Q}$ is $m$, so continuing the discussion above, the original vector $\\mathbf{u}$ lies in its column space. Hence the remainder $\\mathbf{r}=\\boldsymbol{0}$. "
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"r =\n",
"\n",
" 1.0e-15 *\n",
"\n",
" 0.0000 + 0.0647i\n",
" 0.0555 + 0.0625i\n",
" 0.2220 - 0.0640i\n",
" 0.2498 + 0.0499i\n",
" 0.1665 - 0.0122i\n"
]
}
],
"source": [
"c = Q'*u; \n",
"v = Q*c;\n",
"r = u - v"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is another way to arrive at a fact we already knew: Multiplication by $\\mathbf{Q}^*=\\mathbf{Q}^{-1}$ changes the basis to the columns of $\\mathbf{Q}$."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Matlab",
"language": "matlab",
"name": "matlab"
},
"language_info": {
"codemirror_mode": "octave",
"file_extension": ".m",
"help_links": [
{
"text": "MetaKernel Magics",
"url": "https://github.com/calysto/metakernel/blob/master/metakernel/magics/README.md"
}
],
"mimetype": "text/x-octave",
"name": "matlab",
"version": "0.11.0"
}
},
"nbformat": 4,
"nbformat_minor": 0
}