1.4 Matrix Operations


Estimated time: 30 minutes

Just like vectors, matrices also have their own operations.

Transpose

Let \bold{A}\in\mathbb{R}^{m\times n} be a matrix. The transpose, denoted \bold{A}^T, is a matrix of dimension n\times m having the rows of \bold{A} as its columns and the columns of \bold{A} as its rows.

In other words, if \bold{B}=\bold{A}^T, then b_{ij}=a_{ji} for all i and j.

A square matrix \bold{A} is called symmetric if \bold{A}^T=\bold{A} and skew-symmetric if \bold{A}^T=-\bold{A}.

Note: Skew-symmetry

Can the last diagonal element of a skew-symmetric matrix be -5?

No. For a skew-symmetric matrix, every diagonal element satisfies a_{ii} = -a_{ii}, so all the diagonal elements must be zero.
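These definitions are easy to check numerically. Here is a minimal NumPy sketch (NumPy is an assumed dependency, not part of the lesson) that transposes a matrix and tests the symmetric and skew-symmetric conditions:

```python
import numpy as np

# A sample 2x2 skew-symmetric matrix; note the zero diagonal.
A = np.array([[ 0, 2],
              [-2, 0]])

print(A.T)                          # transpose: rows become columns
print(np.array_equal(A.T, A))       # symmetric?       -> False
print(np.array_equal(A.T, -A))      # skew-symmetric?  -> True
print(np.all(np.diag(A) == 0))      # zero diagonal?   -> True
```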

Addition (and Subtraction)

Two matrices can be added or subtracted only if their sizes match. The addition is element-wise, and the resulting matrix has the same dimensions. \bold{U}+\bold{V} =\begin{bmatrix}u_{11}&u_{12}&\ldots&u_{1n}\\ u_{21}&u_{22}&\ldots&u_{2n}\\ \vdots&\vdots&\ddots&\vdots \\ u_{m1}&u_{m2}&\ldots&u_{mn}\end{bmatrix} + \begin{bmatrix}v_{11}&v_{12}&\ldots&v_{1n}\\ v_{21}&v_{22}&\ldots&v_{2n}\\ \vdots&\vdots&\ddots&\vdots \\ v_{m1}&v_{m2}&\ldots&v_{mn}\end{bmatrix} =\begin{bmatrix}u_{11}+v_{11}&u_{12}+v_{12}&\ldots&u_{1n}+v_{1n}\\ u_{21}+v_{21}&u_{22}+v_{22}&\ldots&u_{2n}+v_{2n}\\ \vdots&\vdots&\ddots&\vdots \\ u_{m1}+v_{m1}&u_{m2}+v_{m2}&\ldots&u_{mn}+v_{mn}\end{bmatrix} .
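As a concrete sketch, element-wise addition and subtraction in NumPy (the matrices below are illustrative, not from the text):

```python
import numpy as np

U = np.array([[1, 2],
              [3, 4]])
V = np.array([[5, 6],
              [7, 8]])

print(U + V)   # [[ 6  8]   element-wise sum
               #  [10 12]]
print(U - V)   # [[-4 -4]   element-wise difference
               #  [-4 -4]]
```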

Multiplication by a Scalar

For a scalar k\in\mathbb{R}, the scalar multiplication of a matrix \bold{U}\in\mathbb{R}^{m\times n} by k is element-wise multiplication by k, and the outcome is denoted k\bold{U}. k\bold{U} =k\begin{bmatrix}u_{11}&u_{12}&\ldots&u_{1n}\\ u_{21}&u_{22}&\ldots&u_{2n}\\ \vdots&\vdots&\ddots&\vdots \\ u_{m1}&u_{m2}&\ldots&u_{mn}\end{bmatrix} =\begin{bmatrix}ku_{11}&ku_{12}&\ldots&ku_{1n}\\ ku_{21}&ku_{22}&\ldots&ku_{2n}\\ \vdots&\vdots&\ddots&\vdots \\ ku_{m1}&ku_{m2}&\ldots&ku_{mn}\end{bmatrix} .
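A one-line NumPy illustration of the same idea (the values of k and U are made up):

```python
import numpy as np

k = 3
U = np.array([[1, 2],
              [3, 4]])

print(k * U)   # every entry multiplied by k: [[ 3  6]
               #                               [ 9 12]]
```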

Matrix-Matrix Multiplication

Matrix multiplication is only defined when the number of columns in the first matrix equals the number of rows in the second matrix. For two matrices \bold{A}_{m \times n} and \bold{B}_{n \times p}, the product \bold{C} = \bold{AB} is an m \times p matrix where: c_{ij} = \sum_{k=1}^{n} a_{ik} \cdot b_{kj}
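The summation formula translates directly into three nested loops. The sketch below is a plain-Python illustration of the definition, not an optimized implementation:

```python
def matmul(A, B):
    # Multiply an m x n matrix A by an n x p matrix B
    # using c_ij = sum over k of a_ik * b_kj.
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must match rows of B"
    C = [[0] * p for _ in range(m)]
    for i in range(m):          # each row of A
        for j in range(p):      # each column of B
            for k in range(n):  # accumulate the sum over k
                C[i][j] += A[i][k] * B[k][j]
    return C
```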

Example

Let us now multiply the following two matrices.

\bold{A}= \begin{bmatrix} 2 & 3 & 1 \\ 4 & 0 & 5 \\ 1 & 2 & 3 \end{bmatrix}\text{ and } \bold{B}= \begin{bmatrix} 1 & 2 \\ 3 & 1 \\ 2 & 4 \end{bmatrix}.

The product \bold{C} = \bold{AB} is calculated as follows. For a 3 \times 3 matrix multiplied by a 3 \times 2 matrix, the result is a 3 \times 2 matrix.

Each element c_{ij} of the result matrix \bold{C} is computed as: c_{ij} = \sum_{k=1}^{3} a_{ik} \cdot b_{kj}

First row of \bold{C}: \begin{align} c_{11} &= (2)(1) + (3)(3) + (1)(2) = 2 + 9 + 2 = 13 \\ c_{12} &= (2)(2) + (3)(1) + (1)(4) = 4 + 3 + 4 = 11 \end{align}

Second row of \bold{C}: \begin{align} c_{21} &= (4)(1) + (0)(3) + (5)(2) = 4 + 0 + 10 = 14 \\ c_{22} &= (4)(2) + (0)(1) + (5)(4) = 8 + 0 + 20 = 28 \end{align}

Third row of \bold{C}: \begin{align} c_{31} &= (1)(1) + (2)(3) + (3)(2) = 1 + 6 + 6 = 13 \\ c_{32} &= (1)(2) + (2)(1) + (3)(4) = 2 + 2 + 12 = 16 \end{align}

Therefore, the product matrix is: \begin{equation} \bold{C} = \bold{AB} = \begin{bmatrix} 13 & 11 \\ 14 & 28 \\ 13 & 16 \end{bmatrix} \end{equation}
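The hand computation can be double-checked with NumPy's @ operator (or with the matmul sketch above):

```python
import numpy as np

A = np.array([[2, 3, 1],
              [4, 0, 5],
              [1, 2, 3]])
B = np.array([[1, 2],
              [3, 1],
              [2, 4]])

print(A @ B)   # [[13 11]
               #  [14 28]
               #  [13 16]]
```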

Linear Transformation

A transformation (function or mapping) T : \mathbb{R}^n \rightarrow \mathbb{R}^m, read "T from \mathbb{R}^n to \mathbb{R}^m", is a rule that assigns to each vector \bold{x} \in \mathbb{R}^n a vector T(\bold{x}) \in \mathbb{R}^m.

Matrices serve as linear transformations:

T(\bold{x}) = A\bold{x}

The set \mathbb{R}^n is called the domain of T, and the set \mathbb{R}^m is called the codomain of T.

Given a matrix A with dimensions m \times n and a vector \bold{x} \in \mathbb{R}^n, we can compute the matrix-vector product (each entry of the result is the dot product of a row of A with \bold{x}):

A\bold{x} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{bmatrix} = \bold{b}

A\bold{x} = \bold{b}

To perform the matrix multiplication A\bold{x}, the number of columns in A must match the number of entries in the vector \bold{x}.
That is, if A is an m \times n matrix and \bold{x} is an n \times 1 column vector, the multiplication is valid and the result will be an m \times 1 column vector.
If the dimensions do not align, the product is undefined.
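The sketch below (matrix and vectors invented for illustration) computes T(\bold{x}) = A\bold{x} and checks the two properties that make T linear, T(\bold{x}+\bold{y}) = T(\bold{x}) + T(\bold{y}) and T(c\bold{x}) = cT(\bold{x}):

```python
import numpy as np

A = np.array([[1, 0, 2],
              [0, 3, 1]])   # 2 x 3, so T maps R^3 to R^2
x = np.array([1, 2, 3])
y = np.array([0, 1, 1])

print(A @ x)   # T(x) = Ax, a vector in R^2: [7 9]

# Linearity checks
print(np.array_equal(A @ (x + y), A @ x + A @ y))   # True
print(np.array_equal(A @ (5 * x), 5 * (A @ x)))     # True
```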

Inverse of a 2×2 Matrix

The inverse of a matrix is like the “undo” button for a linear transformation. When you multiply a matrix by its inverse, you get the identity matrix, which acts like the number 1 for matrices.

For a matrix
\bold{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}

the inverse \bold{A}^{-1} is defined as

\bold{A}^{-1} = \frac{1}{\det(\bold{A})} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}, where the determinant is
\det(\bold{A}) = ad - bc.

\bold{A} must be invertible, meaning \det(\bold{A}) \neq 0.

If \det(\bold{A}) = 0, the inverse does not exist (the matrix is singular).
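A minimal NumPy sketch of the 2×2 formula (the function name inverse_2x2 and the sample matrix are mine, for illustration):

```python
import numpy as np

def inverse_2x2(A):
    # Invert a 2x2 matrix using the ad - bc formula.
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:  # exact check for simplicity; use a tolerance with floats
        raise ValueError("det(A) = 0: the matrix is singular, no inverse")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])   # det = 4*6 - 7*2 = 10
A_inv = inverse_2x2(A)
print(A_inv)       # [[ 0.6 -0.7]
                   #  [-0.2  0.4]]
print(A @ A_inv)   # the 2x2 identity matrix (up to float rounding)
```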