Matrix Calculations Guide
A complete reference for all matrix operations — from basic arithmetic to advanced decompositions. Each section includes the rule, worked examples, and key properties.
1. What is a Matrix?
A matrix is a rectangular array of numbers arranged in rows and columns. An m×n matrix has m rows and n columns. Each entry is identified by its row index i and column index j, written as A[i][j] or A_{ij}.
Example of a 2×3 matrix (2 rows, 3 columns):
| 1 | 2 | 3 |
| 4 | 5 | 6 |
Special cases: a 1×n matrix is a row vector; an m×1 matrix is a column vector; an n×n matrix is a square matrix.
| 3 | 1 | 4 |
The above is a 1×3 row vector. Below is a 3×1 column vector:
| 2 |
| 7 |
| 1 |
2. Matrix Addition & Subtraction
Two matrices can be added or subtracted only if they have the same dimensions. Each entry of the result is the sum (or difference) of the corresponding entries.
Worked Example
Let A and B be:
A =
| 1 | 2 |
| 3 | 4 |
B =
| 5 | 6 |
| 7 | 8 |
Then A + B =
| 6 | 8 |
| 10 | 12 |
Properties: Addition is commutative (A + B = B + A) and associative ((A + B) + C = A + (B + C)).
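The rule above translates directly to code. A minimal sketch, assuming NumPy is installed (plain nested loops work just as well):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Addition and subtraction require identical shapes;
# NumPy raises an error if the dimensions differ.
S = A + B            # [[6, 8], [10, 12]]
D = B - A            # [[4, 4], [4, 4]]

# Commutativity: A + B == B + A
assert (S == B + A).all()
```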
3. Scalar Operations
Multiplying a matrix by a scalar k means multiplying every element by k. Similarly, adding a scalar k (an element-wise convention, as in spreadsheet or array libraries) means adding k to every element.
Example: 3 × A
If A is:
| 1 | 2 |
| 3 | 4 |
Then 3A =
| 3 | 6 |
| 9 | 12 |
Key property: det(kA) = k^n · det(A) for an n×n matrix.
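A quick sketch of scalar operations and the determinant-scaling property, assuming NumPy is available:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
k = 3

kA = k * A           # every entry multiplied by k -> [[3, 6], [9, 12]]
A_plus = A + k       # scalar addition: k added to every entry

# det(kA) = k^n * det(A) for an n x n matrix (n = 2 here)
assert np.isclose(np.linalg.det(kA), k**2 * np.linalg.det(A))
```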
4. Matrix Multiplication
To multiply A (m×n) by B (n×p), the number of columns in A must equal the number of rows in B. The result is an m×p matrix. Each entry (AB)[i][j] is the dot product of the i-th row of A and the j-th column of B.
Worked 2×2 Example
Let A and B be:
A =
| 1 | 2 |
| 3 | 4 |
B =
| 5 | 6 |
| 7 | 8 |
Computing AB:
- (AB)[1][1] = 1·5 + 2·7 = 5 + 14 = 19
- (AB)[1][2] = 1·6 + 2·8 = 6 + 16 = 22
- (AB)[2][1] = 3·5 + 4·7 = 15 + 28 = 43
- (AB)[2][2] = 3·6 + 4·8 = 18 + 32 = 50
Result AB =
| 19 | 22 |
| 43 | 50 |
Properties: Associative — (AB)C = A(BC). Distributive — A(B+C) = AB + AC. Not commutative in general — AB ≠ BA, and BA may not even be defined when AB is.
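The entry formula above can be checked in code: each (AB)[i][j] is a dot product of a row of A with a column of B. A sketch assuming NumPy (the nested comprehension mirrors the definition):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Built-in matrix product: columns of A must match rows of B.
AB = A @ B                               # [[19, 22], [43, 50]]

# The same result entry by entry:
# (AB)[i][j] = row i of A dotted with column j of B.
m, n = A.shape
_, p = B.shape
C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
     for i in range(m)]

assert (AB == np.array(C)).all()
# Not commutative: AB != BA here.
assert not (A @ B == B @ A).all()
```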
5. Hadamard (Element-wise) Product
The Hadamard product A ⊙ B multiplies corresponding elements of two matrices of the same size. Also called the Schur product.
Example with A and B:
A =
| 1 | 2 |
| 3 | 4 |
B =
| 5 | 0 |
| 2 | 3 |
A ⊙ B =
| 5 | 0 |
| 6 | 12 |
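In NumPy (assumed here), `*` on equal-shaped arrays is the Hadamard product, distinct from the matrix product `@`:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 0], [2, 3]])

# Element-wise product: shapes must match exactly.
H = A * B            # [[5, 0], [6, 12]]
```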
6. Kronecker Product
The Kronecker product A ⊗ B replaces each element A[i][j] with the scaled block A[i][j]·B. If A is m×n and B is p×q, the result is (mp)×(nq).
Example: Let A = [[1,2],[3,4]] and B = [[0,5],[6,7]]. Then A ⊗ B is the 4×4 matrix:
| 0 | 5 | 0 | 10 |
| 6 | 7 | 12 | 14 |
| 0 | 15 | 0 | 20 |
| 18 | 21 | 24 | 28 |
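NumPy (assumed available) provides this directly as `np.kron`; each entry of A scales a full copy of B:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])

K = np.kron(A, B)    # each A[i][j] becomes the block A[i][j] * B
# (2*2) x (2*2) = 4 x 4 result, matching the table above
assert K.shape == (4, 4)
```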
7. Matrix Transpose
The transpose of A is written A^T. Rows become columns: entry (i,j) of A becomes entry (j,i) of A^T. A 2×3 matrix becomes a 3×2 matrix.
Example
Original 2×3 matrix A:
| 1 | 2 | 3 |
| 4 | 5 | 6 |
Transpose A^T (3×2):
| 1 | 4 |
| 2 | 5 |
| 3 | 6 |
Properties: (A^T)^T = A. (AB)^T = B^T A^T. A matrix satisfying A = A^T is called symmetric.
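A sketch of the transpose and the reversal rule (AB)^T = B^T A^T, assuming NumPy; the matrix B below is an arbitrary example chosen so the product is defined:

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])   # 2 x 3

At = A.T                               # 3 x 2: entry (i, j) moves to (j, i)
assert At.shape == (3, 2)
assert (At.T == A).all()               # (A^T)^T = A

B = np.array([[1, 0], [0, 1], [2, 2]]) # 3 x 2, so A @ B is defined
assert ((A @ B).T == B.T @ A.T).all()  # (AB)^T = B^T A^T
```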
8. Matrix Determinant
The determinant is a scalar associated with a square matrix. Geometrically, it represents the signed scaling factor of the linear transformation: the signed volume of the parallelepiped spanned by the row vectors. If det(A) = 0, the rows are linearly dependent and the transformation collapses space — the matrix is singular and has no inverse.
8.1 Determinant of a 2×2 Matrix
For a 2×2 matrix [[a,b],[c,d]], the determinant is ad − bc:
Example — find det of [[3,8],[4,6]]:
| 3 | 8 |
| 4 | 6 |
det = (3)(6) − (8)(4) = 18 − 32 = −14
8.2 Determinant of a 3×3 Matrix — Laplace (Cofactor) Expansion
Expand along the first row. Each term uses a 2×2 minor (delete the row and column of that entry) with alternating signs (+, −, +):
det(A) = a₁₁·M₁₁ − a₁₂·M₁₂ + a₁₃·M₁₃
Where M_{ij} is the determinant of the 2×2 submatrix obtained by deleting row i and column j.
Worked Example: Compute det of a 3×3 Matrix
Let A be:
| 2 | 3 | 1 |
| 0 | 4 | 5 |
| 1 | 2 | 3 |
Expand along row 1. The three cofactors use the 2×2 minors:
M_{11} — delete row 1, col 1 — gives [[4,5],[2,3]]:
| 4 | 5 |
| 2 | 3 |
M_{11} = (4)(3) − (5)(2) = 12 − 10 = 2
M_{12} — delete row 1, col 2 — gives [[0,5],[1,3]]:
| 0 | 5 |
| 1 | 3 |
M_{12} = (0)(3) − (5)(1) = 0 − 5 = −5
M_{13} — delete row 1, col 3 — gives [[0,4],[1,2]]:
| 0 | 4 |
| 1 | 2 |
M_{13} = (0)(2) − (4)(1) = 0 − 4 = −4
Applying the signs (+, −, +):
det(A) = 2·M₁₁ − 3·M₁₂ + 1·M₁₃ = 2·(2) − 3·(−5) + 1·(−4) = 4 + 15 − 4 = 15
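Cofactor expansion generalizes to any size. A pure-Python sketch of recursive expansion along the first row (fine for small matrices; the cost grows factorially, so row reduction is preferred for large ones):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        # Alternating signs (+, -, +, ...): (-1)^(0+j)
        total += (-1) ** j * M[0][j] * det(minor)
    return total

assert det([[3, 8], [4, 6]]) == -14                  # the 2x2 example above
assert det([[2, 3, 1], [0, 4, 5], [1, 2, 3]]) == 15  # the 3x3 example above
```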
8.3 Sarrus' Rule for 3×3 Matrices
Write the matrix and repeat the first two columns to the right. Sum the three downward diagonals and subtract the three upward diagonals. For A = [[a,b,c],[d,e,f],[g,h,i]]:
det(A) = aei + bfg + cdh − ceg − afh − bdi
8.4 Properties of the Determinant
| Property | Formula / Rule |
|---|---|
| Product rule | det(AB) = det(A) · det(B) |
| Transpose | det(A^T) = det(A) |
| Scalar scaling | det(kA) = k^n · det(A) for n×n matrix |
| Row swap | Swapping two rows negates the determinant |
| Row scale | Multiplying one row by k multiplies det by k |
| Row add | Adding a multiple of one row to another: det unchanged |
| Identity | det(I) = 1 |
| Triangular matrix | det = product of diagonal entries |
| Singular matrix | det = 0 (no inverse exists) |
9. Matrix Trace
The trace of a square matrix is the sum of its main diagonal elements.
Example:
| 5 | 2 | 1 |
| 0 | 3 | 7 |
| 4 | 0 | 9 |
tr(A) = 5 + 3 + 9 = 17
Key properties: tr(A) equals the sum of all eigenvalues of A. tr(AB) = tr(BA) (cyclic permutation property). tr(A + B) = tr(A) + tr(B).
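A sketch of the trace and the eigenvalue-sum property, assuming NumPy (the eigenvalues may be complex, but their sum equals the real trace):

```python
import numpy as np

A = np.array([[5, 2, 1], [0, 3, 7], [4, 0, 9]])

t = np.trace(A)                        # 5 + 3 + 9 = 17

# tr(A) equals the sum of the eigenvalues of A.
eig_sum = np.linalg.eigvals(A).sum()
assert np.isclose(eig_sum, t)
```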
10. Matrix Rank
The rank of a matrix is the number of linearly independent rows (equivalently, columns). It equals the number of non-zero rows in the RREF of the matrix.
To find rank, row-reduce to echelon form and count pivots. Example — find rank of:
| 1 | 2 | 3 |
| 2 | 4 | 6 |
| 0 | 1 | 2 |
Step 1: R2 → R2 − 2·R1 gives [0, 0, 0]. Step 2: swap R2 ↔ R3 to move the zero row to the bottom, then R1 → R1 − 2·R2 clears the entry above the second pivot. After RREF:
| 1 | 0 | -1 |
| 0 | 1 | 2 |
| 0 | 0 | 0 |
Two non-zero rows → rank = 2.
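In code, rank is usually computed numerically rather than by hand row reduction. A sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 4, 6], [0, 1, 2]])

# Row 2 is exactly 2x row 1, so only two rows are independent.
r = np.linalg.matrix_rank(A)
assert r == 2
```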
11. Matrix Norm
A matrix norm measures the 'size' of a matrix. Common choices:
| Norm | Formula | Description |
|---|---|---|
| Frobenius | √(Σᵢⱼ aᵢⱼ²) | Square root of sum of all squared entries |
| 1-norm | maxⱼ Σᵢ \|aᵢⱼ\| | Maximum absolute column sum |
| ∞-norm | maxᵢ Σⱼ \|aᵢⱼ\| | Maximum absolute row sum |
Example — Frobenius norm of [[1,2],[3,4]]: √(1+4+9+16) = √30 ≈ 5.477.
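All three norms are available through `np.linalg.norm`, assuming NumPy; the `ord` argument selects which one:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

fro = np.linalg.norm(A, 'fro')         # sqrt(1 + 4 + 9 + 16) = sqrt(30)
one = np.linalg.norm(A, 1)             # max column sum: |2| + |4| = 6
inf = np.linalg.norm(A, np.inf)        # max row sum: |3| + |4| = 7

assert np.isclose(fro, 30 ** 0.5)
assert one == 6 and inf == 7
```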
12. Matrix Inverse
The inverse A⁻¹ of a square matrix A satisfies A·A⁻¹ = A⁻¹·A = I. A matrix is invertible (non-singular) if and only if det(A) ≠ 0.
12.1 Direct Formula for 2×2 Inverse
For A = [[a,b],[c,d]] with det(A) = ad − bc ≠ 0:
A⁻¹ = (1/(ad − bc)) · [[d, −b], [−c, a]]
Example: find the inverse of [[2,1],[5,3]]. det = 2·3 − 1·5 = 6 − 5 = 1.
| 2 | 1 |
| 5 | 3 |
A⁻¹ = (1/1)·[[3,−1],[−5,2]] =
| 3 | -1 |
| -5 | 2 |
12.2 Gauss-Jordan Method for Larger Matrices
Form the augmented matrix [A|I] and row-reduce until the left side becomes I. The right side is then A⁻¹.
Example: find the inverse of [[1,2],[3,4]].
| 1 | 2 | 1 | 0 |
| 3 | 4 | 0 | 1 |
R2 → R2 − 3·R1:
| 1 | 2 | 1 | 0 |
| 0 | -2 | -3 | 1 |
R2 → R2 / (−2):
| 1 | 2 | 1 | 0 |
| 0 | 1 | 3/2 | -1/2 |
R1 → R1 − 2·R2:
| 1 | 0 | -2 | 1 |
| 0 | 1 | 3/2 | -1/2 |
So A⁻¹ = [[-2,1],[3/2,−1/2]]. Verify: A·A⁻¹ = I.
Properties: (AB)⁻¹ = B⁻¹ A⁻¹. (A^T)⁻¹ = (A⁻¹)^T. (A⁻¹)⁻¹ = A.
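A sketch of computing and verifying an inverse, assuming NumPy; the second matrix B is an arbitrary invertible example used to check the reversal rule:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

Ainv = np.linalg.inv(A)                # [[-2, 1], [1.5, -0.5]]

# Verify A @ Ainv = I (up to floating-point error).
assert np.allclose(A @ Ainv, np.eye(2))

# (AB)^-1 = B^-1 A^-1
B = np.array([[2.0, 1.0], [5.0, 3.0]])
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
```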
13. Cofactor Matrix & Adjugate
The minor M_{ij} is the determinant of the submatrix formed by deleting row i and column j. The cofactor C_{ij} is the signed minor:
C_{ij} = (−1)^(i+j) · M_{ij}
The cofactor matrix (or comatrix) is the matrix of all cofactors. The adjugate (classical adjoint) is the transpose of the cofactor matrix:
adj(A) = C^T, and for invertible A, A⁻¹ = adj(A) / det(A)
The sign pattern of the cofactor matrix for a 3×3 matrix is:
| + | − | + |
| − | + | − |
| + | − | + |
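The definition translates into a short helper. A sketch assuming NumPy, checked against the identity A · adj(A) = det(A) · I:

```python
import numpy as np

def adjugate(A):
    """Adjugate = transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take its determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 3.0, 1.0], [0.0, 4.0, 5.0], [1.0, 2.0, 3.0]])
# Fundamental identity: A @ adj(A) = det(A) * I
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
```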
14. Reduced Row Echelon Form (RREF)
A matrix is in RREF if: (1) all zero rows are at the bottom, (2) the leading entry (pivot) of each non-zero row is 1, (3) each pivot lies to the right of the pivot above it, and (4) all other entries in each pivot column are 0.
Step-by-Step Example
Starting matrix:
| 2 | 4 | 2 |
| 1 | 2 | 3 |
| 3 | 1 | 1 |
R1 ↔ R2 (swap to get a 1 in the pivot position):
| 1 | 2 | 3 |
| 2 | 4 | 2 |
| 3 | 1 | 1 |
R2 → R2 − 2·R1, R3 → R3 − 3·R1:
| 1 | 2 | 3 |
| 0 | 0 | -4 |
| 0 | -5 | -8 |
R2 ↔ R3:
| 1 | 2 | 3 |
| 0 | -5 | -8 |
| 0 | 0 | -4 |
Scale rows to get leading 1s (R2 → R2/(−5), R3 → R3/(−4)) and eliminate above each pivot. Since this matrix is invertible, its RREF is the 3×3 identity matrix.
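The four RREF conditions above can be enforced mechanically by Gauss-Jordan elimination. A sketch assuming NumPy, with partial pivoting for numerical safety:

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce a matrix to reduced row echelon form (Gauss-Jordan)."""
    M = A.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row == rows:
            break
        # Partial pivoting: pick the largest entry in this column.
        pivot = max(range(pivot_row, rows), key=lambda r: abs(M[r, col]))
        if abs(M[pivot, col]) < tol:
            continue                                    # no pivot here
        M[[pivot_row, pivot]] = M[[pivot, pivot_row]]   # swap rows
        M[pivot_row] /= M[pivot_row, col]               # scale pivot to 1
        for r in range(rows):                           # clear the column
            if r != pivot_row:
                M[r] -= M[r, col] * M[pivot_row]
        pivot_row += 1
    return M

A = np.array([[2, 4, 2], [1, 2, 3], [3, 1, 1]])
assert np.allclose(rref(A), np.eye(3))   # this example reduces to I
```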
15. LU Decomposition
LU decomposition factors a square matrix as PA = LU, where P is a permutation matrix (records row swaps), L is unit lower triangular (1s on diagonal), and U is upper triangular.
Example: for A = [[2,1,1],[4,3,3],[8,7,9]], one factorization (with no row swaps, so P = I) gives:
L (lower triangular with 1s on diagonal):
| 1 | 0 | 0 |
| 2 | 1 | 0 |
| 4 | 3 | 1 |
U (upper triangular):
| 2 | 1 | 1 |
| 0 | 1 | 1 |
| 0 | 0 | 2 |
Applications: det(A) = ±(product of diagonal of U). Solve Ax = b via forward then back substitution. Each additional right-hand side costs only O(n²) after the O(n³) factorization.
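A sketch using SciPy (assumed installed); note that `scipy.linalg.lu` pivots for stability, so its L and U may differ from a hand factorization done without row swaps:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])

P, L, U = lu(A)                 # SciPy's convention: A = P @ L @ U

assert np.allclose(P @ L @ U, A)
assert np.allclose(np.tril(L), L)   # L lower triangular (1s on diagonal)
assert np.allclose(np.triu(U), U)   # U upper triangular

# |det(A)| = |product of U's diagonal|; the sign comes from P.
assert np.isclose(abs(np.linalg.det(A)), abs(np.prod(np.diag(U))))
```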
16. Eigenvalues & Eigenvectors
A scalar λ is an eigenvalue of A, with a nonzero vector v as a corresponding eigenvector, if Av = λv. Eigenvalues are the roots of the characteristic polynomial det(A − λI) = 0.
Worked 2×2 Example
Find eigenvalues of A = [[4,1],[2,3]].
Form A − λI:
| 4-λ | 1 |
| 2 | 3-λ |
Characteristic polynomial: (4−λ)(3−λ) − (1)(2) = λ² − 7λ + 10 = (λ − 5)(λ − 2) = 0
Eigenvalues: λ₁ = 5, λ₂ = 2.
Find eigenvector for λ₁ = 5: solve (A − 5I)v = 0, i.e., [[-1,1],[2,-2]]v = 0 → v = [1,1] (up to scale).
Find eigenvector for λ₂ = 2: solve (A − 2I)v = 0, i.e., [[2,1],[2,1]]v = 0 → v = [1,−2] (up to scale).
Key properties: sum of eigenvalues = tr(A) = 4+3 = 7 = 5+2 ✓. Product of eigenvalues = det(A) = 12−2 = 10 = 5·2 ✓.
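The worked example can be verified numerically, assuming NumPy (eigenvectors come back normalized, so they match [1,1] and [1,−2] only up to scale):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])

vals, vecs = np.linalg.eig(A)   # columns of vecs are eigenvectors

assert np.allclose(sorted(vals), [2.0, 5.0])
# Each pair satisfies A v = lambda v.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)

# Trace and determinant checks: sum = 7, product = 10.
assert np.isclose(vals.sum(), np.trace(A))
assert np.isclose(vals.prod(), np.linalg.det(A))
```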
17. QR Decomposition
QR decomposition factors A = QR where Q has orthonormal columns (Q^T Q = I) and R is upper triangular. It can be computed by Gram-Schmidt orthogonalization; numerical libraries typically use Householder reflections for stability.
Example: for A = [[1,1],[1,0],[0,1]], the QR decomposition gives Q (3×2) with orthonormal columns and R (2×2) upper triangular:
Q =
| 1/√2 | 1/√6 |
| 1/√2 | -1/√6 |
| 0 | 2/√6 |
R =
| √2 | 1/√2 |
| 0 | √(3/2) |
Applications: Solving least-squares problems (Ax ≈ b when m > n) more stably than forming the normal equations. The QR algorithm for computing eigenvalues.
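A sketch assuming NumPy; `np.linalg.qr` may flip the signs of some columns relative to a hand Gram-Schmidt computation, but Q @ R still reproduces A:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])

Q, R = np.linalg.qr(A)          # reduced QR: Q is 3x2, R is 2x2

assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(2))   # orthonormal columns
assert np.allclose(np.triu(R), R)        # R upper triangular

# Least squares via QR: solve R x = Q^T b for Ax ~ b.
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(R, Q.T @ b)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```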
18. Singular Value Decomposition (SVD)
Every real m×n matrix A has an SVD: A = UΣV^T. In the thin (reduced) form with k = min(m, n), U (m×k) has orthonormal columns, Σ (k×k) is diagonal with non-negative singular values σ₁ ≥ σ₂ ≥ ... ≥ 0, and V (n×k) has orthonormal columns.
Singular values σᵢ = √(eigenvalues of A^T A). The number of non-zero singular values equals rank(A).
Example: A = [[1,2],[3,4],[5,6]] has singular values approximately σ₁ ≈ 9.525, σ₂ ≈ 0.514. U is 3×2, Σ is 2×2, V^T is 2×2.
| Concept | Formula / Fact |
|---|---|
| Singular values | σᵢ = √(λᵢ of A^T A) |
| Rank | Number of non-zero σᵢ |
| Pseudo-inverse | A⁺ = V Σ⁺ U^T |
| Frobenius norm | ‖A‖_F = √(σ₁² + ... + σₖ²) |
Applications: Principal Component Analysis (PCA). Image and data compression (low-rank approximation). Solving under/over-determined systems via pseudo-inverse.
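The example and the table above can be reproduced in a few lines, assuming NumPy (`full_matrices=False` gives the thin SVD):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # thin SVD

assert np.allclose(U @ np.diag(s) @ Vt, A)
# Singular values ~ 9.525 and 0.514; rank = number of non-zero sigma.
assert np.allclose(s, [9.5255, 0.5143], atol=1e-3)

# Frobenius norm = sqrt(sigma_1^2 + sigma_2^2)
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt((s**2).sum()))

# Pseudo-inverse A+ = V Sigma+ U^T (NumPy also offers np.linalg.pinv)
Apinv = Vt.T @ np.diag(1 / s) @ U.T
assert np.allclose(Apinv, np.linalg.pinv(A))
```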
19. Special Matrices
| Type | Description | Key Property |
|---|---|---|
| Zero matrix | All entries are 0 | A + 0 = A; 0·A = 0 |
| Identity matrix I | 1s on diagonal, 0s elsewhere | AI = IA = A |
| Diagonal matrix | Non-zero entries only on main diagonal | det = product of diagonal |
| Symmetric matrix | A = A^T (equals its transpose) | Always has real eigenvalues |
| Skew-symmetric | A = −A^T (diagonal entries must be 0) | Eigenvalues are purely imaginary or 0 |
| Orthogonal matrix | A^T A = I (columns are orthonormal) | det(A) = ±1; preserves lengths |
| Upper triangular | All entries below diagonal are 0 | det = product of diagonal entries |
| Lower triangular | All entries above diagonal are 0 | det = product of diagonal entries |
| Singular matrix | det(A) = 0 | No inverse; rows/cols linearly dependent |
| Idempotent matrix | A² = A | Eigenvalues are 0 or 1 |
Related Calculators
- Matrix Addition Calculator — Add two matrices
- Matrix Subtraction Calculator — Subtract two matrices
- Matrix Multiplication Calculator — Multiply two matrices
- Matrix Determinant Calculator — Find the determinant
- Matrix Inverse Calculator — Find the inverse
- Matrix Transpose Calculator — Transpose a matrix
- Matrix Rank Calculator — Find the rank
- Matrix Trace Calculator — Find the trace
- Matrix Norm Calculator — Compute matrix norms
- Matrix Power Calculator — Raise a matrix to a power
- Eigenvalues Calculator — Find eigenvalues and eigenvectors
- LU Decomposition Calculator — LU decomposition
- QR Decomposition Calculator — QR decomposition
- SVD Calculator — Singular value decomposition
- RREF Calculator — Reduced row echelon form
- Cofactor Matrix Calculator — Compute the cofactor matrix
- Adjugate Matrix Calculator — Compute the adjugate
- Hadamard Product Calculator — Element-wise matrix multiplication
- Kronecker Product Calculator — Tensor product of matrices
Frequently Asked Questions
What is the difference between matrix multiplication and element-wise multiplication?
Matrix multiplication (AB) computes dot products between rows of A and columns of B, requiring the inner dimensions to match. Element-wise (Hadamard) multiplication simply multiplies corresponding entries and requires both matrices to have identical dimensions.
When does a matrix have no inverse?
A matrix has no inverse (it is singular) when its determinant equals zero. This happens when the rows (or columns) are linearly dependent — one row is a linear combination of the others. Geometrically, the transformation collapses space to a lower dimension.
How do I find the determinant of a 4×4 or larger matrix?
Use cofactor expansion recursively — expand along the row or column with the most zeros to minimize arithmetic. Alternatively, row-reduce to upper triangular form (keeping track of sign changes from row swaps and factors from row scaling), then multiply the diagonal entries.
What is the relationship between eigenvalues, trace, and determinant?
For an n×n matrix: the sum of all eigenvalues (with multiplicity) equals the trace, and the product of all eigenvalues equals the determinant. This holds even for complex eigenvalues.
What does it mean for a matrix to have full rank?
A matrix has full rank when its rank equals the smaller of its dimensions — min(m,n). For a square n×n matrix, full rank means rank = n, which is equivalent to the matrix being invertible (det ≠ 0) and having no linearly dependent rows or columns.
Why is matrix multiplication not commutative?
The (i,j) entry of AB involves the i-th row of A and j-th column of B, while BA involves different pairings. Geometrically, applying transformation A then B is generally different from applying B then A. Often BA is not even defined when AB is.
What is the pseudo-inverse and when do I need it?
The Moore-Penrose pseudo-inverse A⁺ generalizes the inverse to non-square and singular matrices. It is computed via SVD as A⁺ = V Σ⁺ U^T. Use it when solving over-determined (more equations than unknowns) or under-determined systems in the least-squares sense.
What is the difference between LU and QR decomposition?
LU decomposition (PA = LU) is optimized for square systems — solving Ax = b efficiently and computing determinants. QR decomposition (A = QR) is more numerically stable and is the go-to method for least-squares problems with rectangular matrices. QR is also the basis of the QR eigenvalue algorithm.
How do singular values relate to eigenvalues?
Singular values of A are the square roots of the eigenvalues of A^T A (or AA^T). They are always real and non-negative. For symmetric positive definite matrices, singular values equal eigenvalues. In general, singular values and eigenvalues of A are different.
What is the rank-nullity theorem?
For an m×n matrix A: rank(A) + nullity(A) = n (the number of columns). The nullity is the dimension of the null space — the set of all vectors x satisfying Ax = 0. The theorem means every column not containing a pivot corresponds to a free variable in the solution.
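The theorem can be checked numerically: count near-zero singular values to get the nullity. A sketch assuming NumPy, using the rank example from section 10:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [0.0, 1.0, 2.0]])
m, n = A.shape

rank = np.linalg.matrix_rank(A)
# Nullity = number of (near-)zero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = int((s < 1e-10).sum())

assert rank + nullity == n      # rank-nullity: 2 + 1 = 3
```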