Math 1410, Spring 2020

Operations on Matrices

Sean Fitzpatrick
University of Lethbridge

Recap

Warm-Up

Determine the matrix transformation that:

  1. Reflects across the \(y\) axis, stretches vertically by a factor of 3, and then rotates by \(45^\circ\text{.}\)

  2. Reflects across the line \(y=x\text{,}\) stretches vertically by a factor of 3, then reflects across the \(y\) axis.
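For reference, here is one way the second transformation could come out, assuming the matrices compose right to left (so the reflection across \(y=x\) acts first):

\begin{equation*} \bbm -1\amp 0\\0\amp 1\ebm\bbm 1\amp 0\\0\amp 3\ebm\bbm 0\amp 1\\1\amp 0\ebm = \bbm 0\amp -1\\3\amp 0\ebm\text{.} \end{equation*}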

Transpose

Transpose of a matrix

  • The transpose of a matrix swaps its rows and columns.

  • If \(A=[a_{ij}]\) is an \(m\times n\) matrix, then \(A^T\) is the \(n\times m\) matrix whose \((i,j)\) entry is \(a_{ji}\text{.}\)

Examples: on the board.
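For instance (one quick example in addition to those on the board):

\begin{equation*} \bbm 1\amp 2\\3\amp 4\\5\amp 6\ebm^T = \bbm 1\amp 3\amp 5\\2\amp 4\amp 6\ebm\text{.} \end{equation*}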

Why transpose?

Why should we care about transpose?

  • Admittedly, it's one of several things we teach you to compute in Math 1410, and make you wait until later courses to really understand.

  • Later, it gets related to dual vectors and dual operators, and is important in things like quantum mechanics.

  • It gives us an easy way to turn columns into rows and vice versa: sometimes we write \(\bbm 1\amp 2\amp 3\ebm^T\) instead of \(\bbm 1\\2\\3\ebm\) because it fits better on the page.

  • It also connects with the dot product: we can write \(\vec{v}\cdot\vec{w} = (\vec{v})^T\vec{w}\text{.}\)
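A quick check of the last point, with vectors chosen just for illustration:

\begin{equation*} \bbm 1\\2\\3\ebm\cdot\bbm 4\\5\\6\ebm = \bbm 1\amp 2\amp 3\ebm\bbm 4\\5\\6\ebm = 4+10+18 = 32\text{.} \end{equation*}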

Properties

In each case, assume that \(A\) and \(B\) have the right size for the operation to be defined.

(All of these hold when both matrices are \(n\times n\text{,}\) but most also make sense for non-square matrices.)

  1. \((A+B)^T = A^T+B^T\)

  2. \((kA)^T = kA^T\)

  3. \((AB)^T=B^TA^T\)

  4. \((A^T)^T=A\)

  5. \((A^T)^{-1}=(A^{-1})^T\text{,}\) if \(A\) is invertible.
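A small check of the third property, with matrices chosen just for illustration: if \(A = \bbm 1\amp 2\\3\amp 4\ebm\) and \(B = \bbm 0\amp 1\\2\amp 3\ebm\text{,}\) then

\begin{equation*} (AB)^T = \bbm 4\amp 7\\8\amp 15\ebm^T = \bbm 4\amp 8\\7\amp 15\ebm = \bbm 0\amp 2\\1\amp 3\ebm\bbm 1\amp 3\\2\amp 4\ebm = B^TA^T\text{.} \end{equation*}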

Symmetric and antisymmetric matrices

An \(n\times n\) matrix \(A\) is symmetric if

\begin{equation*} A^T = A \end{equation*}
and antisymmetric if
\begin{equation*} A^T=-A\text{.} \end{equation*}

What can we say about the entries of \(A\) in each case?
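For instance (one small example of each, chosen just for illustration):

\begin{equation*} \bbm 1\amp 2\amp 3\\2\amp 4\amp 5\\3\amp 5\amp 6\ebm \text{ is symmetric, while } \bbm 0\amp 1\amp -2\\-1\amp 0\amp 3\\2\amp -3\amp 0\ebm \text{ is antisymmetric.} \end{equation*}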

Constructing symmetric matrices

  1. Show that \(A+A^T\) is symmetric, and \(A-A^T\) is antisymmetric, for any \(n\times n\) matrix \(A\text{.}\)

  2. Show that \(AA^T\) and \(A^TA\) are symmetric for any matrix \(A\text{.}\)

  3. Show that any square matrix \(A\) can be written as the sum of a symmetric and an antisymmetric matrix.
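A hint for the last item, using the first: write

\begin{equation*} A = \frac{1}{2}(A+A^T) + \frac{1}{2}(A-A^T)\text{.} \end{equation*}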

Trace

Trace of a matrix

The trace of a matrix is simply the sum of its diagonal entries.

If \(A = [a_{ij}]\) is an \(n\times n\) matrix, then

\begin{equation*} \tr(A) = \sum_{k=1}^n a_{kk} = a_{11}+a_{22}+\cdots + a_{nn}\text{.} \end{equation*}

(If \(A\) is \(m\times n\) with \(m\neq n\text{,}\) the sum runs up to the smaller of \(m\) and \(n\text{.}\))

Examples? On the board!
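For instance (one small example in addition to those on the board):

\begin{equation*} \tr\bbm 2\amp -1\amp 3\\0\amp 4\amp 1\\-1\amp 0\amp 5\ebm = 2+4+5 = 11\text{.} \end{equation*}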

Properties of the trace

  1. \(\tr(A+B) = \tr(A)+\tr(B)\)

  2. \(\tr(kA) = k\tr(A)\)

  3. \(\tr(AB)=\tr(BA)\) (as long as both products are defined)

  4. \(\tr(A^T)=\tr(A)\)
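A quick check of the third property, with matrices chosen just for illustration: if \(A = \bbm 1\amp 2\\3\amp 4\ebm\) and \(B = \bbm 0\amp 1\\2\amp 3\ebm\text{,}\) then

\begin{equation*} AB = \bbm 4\amp 7\\8\amp 15\ebm,\quad BA = \bbm 3\amp 4\\11\amp 16\ebm,\quad \tr(AB)=\tr(BA)=19\text{.} \end{equation*}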

Note: on the set of all \(m\times n\) matrices, the pairing

\begin{equation*} \langle A,B\rangle =\tr(B^T A) \end{equation*}
defines a sort of “dot product”.
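Concretely, the \((j,j)\) entry of \(B^TA\) is \(\sum_i b_{ij}a_{ij}\text{,}\) so this pairing is the dot product of the entries:

\begin{equation*} \langle A,B\rangle = \tr(B^TA) = \sum_{i,j} a_{ij}b_{ij}\text{.} \end{equation*}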

Determinants

Introduction

The determinant is a function that assigns a number to any square matrix \(A\text{.}\)

We denote this number by \(\det A\) or \(\abs{A}\text{.}\)

We'll define \(\det A\) recursively, starting with \(2\times 2\) matrices, and then showing how to reduce the determinant of a larger matrix to smaller ones.

There is a general formula, but it's... complicated.

The \(2\times 2\) case

You've already learned how to compute these, back when we did cross products!

Given \(A=\bbm a\amp c\\b\amp d\ebm\text{,}\) \(\det A = ad-bc\text{:}\)

\begin{equation*} \bvm a\amp c\\b\amp d\evm = ad-bc\text{.} \end{equation*}

Note:

  1. If \(\vec{v}=\bbm a\\b\ebm\) is parallel to \(\vec{w}=\bbm c\\d\ebm\text{,}\) then \(\det A = 0\text{.}\)

  2. Otherwise, \(\det A\) calculates (up to sign) the area of the parallelogram spanned by \(\vec v\) and \(\vec w\text{.}\)
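For instance, with \(\vec{v}=\bbm 1\\2\ebm\) and \(\vec{w}=\bbm 3\\4\ebm\) (chosen just for illustration),

\begin{equation*} \bvm 1\amp 3\\2\amp 4\evm = 1\cdot 4 - 3\cdot 2 = -2\text{,} \end{equation*}
so the parallelogram spanned by \(\vec{v}\) and \(\vec{w}\) has area \(2\text{.}\)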

The \(3\times 3\) case

This is also not a big leap from cross products:

\begin{equation*} \bvm a_1 \amp a_2\amp a_3\\b_1\amp b_2\amp b_3\\c_1\amp c_2\amp c_3\evm = a_1\bvm b_2\amp b_3\\c_2\amp c_3\evm - a_2\bvm b_1\amp b_3\\c_1\amp c_3\evm + a_3\bvm b_1\amp b_2\\c_1\amp c_2\evm\text{.} \end{equation*}

You might recall that this is the same as the scalar triple product \(\vec{a}\cdot (\vec{b}\times \vec{c})\text{.}\)

Example: compute the determinant of \(A = \bbm 2\amp -1\amp 3\\0\amp 4\amp 1\\-1\amp 0\amp 5\ebm\text{.}\)
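For reference, expanding as above gives

\begin{equation*} \det A = 2\bvm 4\amp 1\\0\amp 5\evm - (-1)\bvm 0\amp 1\\-1\amp 5\evm + 3\bvm 0\amp 4\\-1\amp 0\evm = 2(20)+1(1)+3(4) = 53\text{.} \end{equation*}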

Minors and Cofactors

Given an \(n\times n\) matrix \(A = [a_{ij}]\text{,}\)

  • The \((i,j)\) minor of \(A\) is the \((n-1)\times (n-1)\) matrix \(M_{ij}\) obtained by deleting row \(i\) and column \(j\) of \(A\text{.}\)

  • The \((i,j)\) cofactor of \(A\) is the number \(C_{ij}\) defined by

    \begin{equation*} C_{ij} = (-1)^{i+j}\det M_{ij}\text{.} \end{equation*}

Note: \((-1)^{i+j}\) equals \(+1\) if \(i+j\) is even, and \(-1\) if \(i+j\) is odd.

(Now do some examples, Sean)
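For instance, with the matrix \(A\) from the previous slide, deleting row 2 and column 3 gives

\begin{equation*} M_{23} = \bbm 2\amp -1\\-1\amp 0\ebm,\qquad C_{23} = (-1)^{2+3}\bvm 2\amp -1\\-1\amp 0\evm = (-1)(-1) = 1\text{.} \end{equation*}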

Next steps

  • In general, we define \(\det A = a_{11}C_{11}+a_{12}C_{12}+\cdots + a_{1n}C_{1n}\) (cofactor expansion along the first row). (Each cofactor \(C_{1j}\) involves a determinant one size smaller than \(A\text{.}\))

  • Then we prove (OK, state assertively) that we can actually expand along any row or column of \(A\text{.}\)

  • Then we'll observe that determinants of triangular matrices are really easy.

  • Finally, we'll see what happens to the determinant if we use row operations to get a matrix into triangular form.
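A quick preview of the third point, expanding repeatedly along the first column:

\begin{equation*} \bvm 2\amp 1\amp 3\\0\amp 4\amp 5\\0\amp 0\amp 6\evm = 2\bvm 4\amp 5\\0\amp 6\evm = 2(4\cdot 6) = 48\text{,} \end{equation*}
the product of the diagonal entries.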