
The Determinant: A Scalar Invariant of Square Matrices

The determinant of a square matrix is the unique scalar-valued function characterized by alternation and multilinearity. We construct it via the Leibniz formula, prove the product formula det(AB) = det(A)det(B), and derive Cramer's rule for solving linear systems.

Folio Official
March 1, 2026

1. Permutations and Their Signs

Definition 1 (Permutation).
A bijection σ: {1, …, n} → {1, …, n} is called a permutation of degree n. The set of all such permutations forms a group under composition, denoted S_n, with |S_n| = n!.
Definition 2 (Transpositions and the sign of a permutation).
A permutation that interchanges exactly two elements and fixes all others is called a transposition. Every permutation can be written as a product of transpositions; the decomposition is not unique, but one can show that the parity of the number of transpositions is. If σ decomposes into an even number of transpositions, we set sgn(σ) = +1; if an odd number, sgn(σ) = −1. The value sgn(σ) is called the sign (or signature) of σ.
Theorem 3.
For any permutations σ, τ ∈ S_n,
sgn(στ)=sgn(σ)⋅sgn(τ).
Proof.
Write σ as a product of p transpositions and τ as a product of q transpositions. Then στ is a product of p + q transpositions, so sgn(στ) = (−1)^{p+q} = (−1)^p (−1)^q = sgn(σ)⋅sgn(τ). □
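Definition 2 and Theorem 3 are easy to check by machine. A minimal Python sketch (the helper names `sgn` and `compose` are ours, not the text's): it computes the sign by counting inversions, whose parity agrees with the transposition parity of Definition 2, and verifies multiplicativity exhaustively over S_4.

```python
from itertools import permutations

def sgn(perm):
    # Count inversions: pairs i < j with perm[i] > perm[j].
    # The parity of the inversion count equals the transposition
    # parity of Definition 2, so sgn = (-1)^inversions.
    n = len(perm)
    inv = sum(1 for i in range(n)
                for j in range(i + 1, n)
                if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def compose(sigma, tau):
    # (sigma ∘ tau)(i) = sigma(tau(i)); permutations are 0-based tuples.
    return tuple(sigma[t] for t in tau)

# Theorem 3, checked over all 24 * 24 pairs in S_4:
assert all(sgn(compose(s, t)) == sgn(s) * sgn(t)
           for s in permutations(range(4))
           for t in permutations(range(4)))
```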

2. The Definition of the Determinant

Definition 4 (Determinant).
Let A = (a_{ij}) be an n×n matrix. The determinant of A is the scalar
det A = ∑_{σ∈S_n} sgn(σ) ∏_{i=1}^{n} a_{i,σ(i)}.
Example 5.
For n = 2:
det [ a  b ; c  d ] = ad − bc.

For n = 3 (Sarrus' rule):
det [ a_{11}  a_{12}  a_{13} ; a_{21}  a_{22}  a_{23} ; a_{31}  a_{32}  a_{33} ]
= a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} − a_{13}a_{22}a_{31} − a_{12}a_{21}a_{33} − a_{11}a_{23}a_{32}.
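The Leibniz formula of Definition 4 translates directly into code. A minimal sketch (matrices as lists of rows; the function names are ours), checked against the n = 2 and Sarrus formulas above:

```python
from itertools import permutations

def sgn(perm):
    # Sign via inversion count (parity matches Definition 2).
    n = len(perm)
    inv = sum(1 for i in range(n)
                for j in range(i + 1, n)
                if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    # Definition 4: det A = sum over sigma in S_n of
    # sgn(sigma) * a[0][sigma(0)] * ... * a[n-1][sigma(n-1)].
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

# n = 2: ad - bc
assert det_leibniz([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3

# n = 3: agrees with Sarrus' rule on a sample matrix
A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
sarrus = (1*5*10 + 2*6*7 + 3*4*8) - (3*5*7 + 2*4*10 + 1*6*8)
assert det_leibniz(A) == sarrus
```

The n! terms make this exponentially slow; it serves as a reference implementation for the faster expansions below, not a practical algorithm.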

3. Alternating and Multilinear Properties of the Determinant

Theorem 6 (Fundamental properties of the determinant).
The determinant satisfies the following properties with respect to its rows (and, by symmetry, its columns):
  1. Multilinearity. The determinant is linear in each row separately.

  2. Alternation. Interchanging two rows reverses the sign of the determinant.

  3. Normalization. det I_n = 1.

Proof.
(1) Multilinearity: Suppose the i-th row of A has the form a_i = c a_i′ + d a_i″. In the formula det A = ∑_σ sgn(σ) ∏_k a_{k,σ(k)}, each term contains exactly one factor from the i-th row, so expanding that factor gives det A = c det A′ + d det A″, where A′ and A″ are the matrices obtained by replacing the i-th row with a_i′ and a_i″ respectively.

(2) Alternation: Let A′ be the matrix obtained from A by swapping rows i and j. The substitution σ ↦ σ∘(i j) in the defining sum introduces a factor of sgn((i j)) = −1, giving det A′ = −det A.

(3) Normalization: If σ = id, every factor in the product ∏_i (I_n)_{i,σ(i)} equals 1; if σ ≠ id, then σ(i) ≠ i for some i and the corresponding factor is 0. Hence det I_n = sgn(id)⋅1 = 1. □
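All three properties of Theorem 6 can be checked numerically on a sample matrix, using the Leibniz formula of Definition 4 as the reference implementation (a sketch; the matrix and helper names are ours):

```python
from itertools import permutations

def det(A):
    # Leibniz formula (Definition 4), sign via inversion count.
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if sigma[i] > sigma[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

A = [[2, 1, 3], [0, -1, 2], [1, 0, 1]]

# (3) Normalization: det I_3 = 1
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert det(I3) == 1

# (2) Alternation: swapping rows 0 and 1 flips the sign
assert det([A[1], A[0], A[2]]) == -det(A)

# (1) Multilinearity in row 0: if row0 = 2u + 3v, the determinant splits
u, v = [1, 0, 2], [0, 1, -1]
row0 = [2 * x + 3 * y for x, y in zip(u, v)]
assert det([row0, A[1], A[2]]) == 2 * det([u, A[1], A[2]]) + 3 * det([v, A[1], A[2]])
```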
Theorem 7.
The following consequences hold:
  1. If two rows of A are equal, then det A = 0.

  2. If some row of A is the zero vector, then det A = 0.

  3. Adding a scalar multiple of one row to another leaves the determinant unchanged.

  4. Scaling a single row by c multiplies the determinant by c.

  5. det A^T = det A.

Proof.
(1) If rows i and j are equal, swapping them leaves the matrix unchanged, so det A = −det A by alternation, whence det A = 0.

(2) A zero row equals 0 times itself, so linearity in that row gives det A = 0 ⋅ det A = 0.

(3) The operation R_i → R_i + c R_j yields, by multilinearity, det A + c det A′, where A′ is A with its i-th row replaced by the j-th row; rows i and j of A′ are equal, so det A′ = 0 by (1).

(4) This is immediate from multilinearity.

(5) In the defining formula, replacing σ by σ^{−1} and observing that sgn(σ^{−1}) = sgn(σ) yields det A^T = det A. □
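Consequences (1), (3), and (5) likewise admit quick numerical checks against the Leibniz reference implementation (same sketch conventions as before):

```python
from itertools import permutations

def det(A):
    # Leibniz formula (Definition 4), sign via inversion count.
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if sigma[i] > sigma[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

A = [[2, 1, 3], [0, -1, 2], [1, 0, 1]]

# (1) Two equal rows force determinant 0
assert det([A[0], A[0], A[2]]) == 0

# (3) R_1 -> R_1 + 5 R_0 leaves the determinant unchanged
B = [A[0], [x + 5 * y for x, y in zip(A[1], A[0])], A[2]]
assert det(B) == det(A)

# (5) det A^T = det A
At = [list(col) for col in zip(*A)]
assert det(At) == det(A)
```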

4. Cofactor Expansion

Definition 8 (Cofactors).
Let M_{ij} denote the determinant of the (n−1)×(n−1) submatrix obtained by deleting the i-th row and j-th column of A. The quantity ã_{ij} = (−1)^{i+j} M_{ij} is called the (i, j) cofactor of A.
Theorem 9 (Cofactor expansion).
For any fixed row index i (expansion along the i-th row):
det A = ∑_{j=1}^{n} a_{ij} ã_{ij}.
For any fixed column index j (expansion along the j-th column):
det A = ∑_{i=1}^{n} a_{ij} ã_{ij}.
Proof.
We prove the row expansion. Group the terms of det A = ∑_{σ∈S_n} sgn(σ) ∏_{k=1}^{n} a_{k,σ(k)} according to the value σ(i) = j. For each such j, the remaining assignment {1, …, n} ∖ {i} → {1, …, n} ∖ {j} is an (n−1)-permutation σ′, and one checks by counting transpositions that sgn(σ) = (−1)^{i+j} sgn(σ′). Therefore
det A = ∑_{j=1}^{n} a_{ij} (−1)^{i+j} ∑_{σ′} sgn(σ′) ∏_{k≠i} a_{k,σ′(k)} = ∑_{j=1}^{n} a_{ij} ã_{ij}.
The column expansion follows by combining detA=detAT with the row expansion. □
Example 10.
Let A = [ 2  1  3 ; 0  −1  2 ; 1  0  1 ]. Expanding along the first row gives
det A = 2 ⋅ det [ −1  2 ; 0  1 ] − 1 ⋅ det [ 0  2 ; 1  1 ] + 3 ⋅ det [ 0  −1 ; 1  0 ]
= 2(−1) − 1(−2) + 3(1) = −2 + 2 + 3 = 3.
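Cofactor expansion along the first row gives a natural recursive implementation; applied to Example 10 it reproduces det A = 3. A sketch with 0-based indices, so the sign is (−1)^j rather than (−1)^{1+j} (same parity):

```python
def minor(A, i, j):
    # Submatrix with row i and column j deleted.
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det_cofactor(A):
    # Theorem 9, expansion along the first row:
    # det A = sum_j (-1)^j * a[0][j] * det(minor(A, 0, j))
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det_cofactor(minor(A, 0, j))
               for j in range(len(A)))

# Example 10:
A = [[2, 1, 3], [0, -1, 2], [1, 0, 1]]
assert det_cofactor(A) == 3
```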

5. The Product Formula

Theorem 11 (Multiplicativity of the determinant).
For any n×n matrices A and B,
det(AB) = det A ⋅ det B.
Proof.
If det A = 0, then A is singular, so rank(AB) ≤ rank A < n, hence AB is also singular, and both sides equal zero.

If detA=0, then A is invertible and can be written as a product of elementary matrices: A=E1​E2​⋯Ek​. One verifies from the basic properties that det(Ei​B)=detEi​⋅detB for each elementary matrix Ei​. The general result follows by induction. □
Theorem 12.
A square matrix A is invertible if and only if det A ≠ 0. In that case, det(A^{−1}) = (det A)^{−1}.
Proof.
If A is invertible, then AA^{−1} = I, so the product formula gives det A ⋅ det(A^{−1}) = 1. It follows that det A ≠ 0 and det(A^{−1}) = (det A)^{−1}.

Conversely, if det A ≠ 0 but A were singular, then rank A < n and row reduction would produce a zero row, forcing det A = 0, a contradiction. □

6. Cramer's Rule

Theorem 13 (Cramer's rule).
Let A be an invertible n×n matrix. The unique solution of Ax = b is given by
x_i = det A_i / det A   (i = 1, …, n),
where A_i is the matrix formed by replacing the i-th column of A with b.
Proof.
Since A is invertible, the solution is x = A^{−1}b. Applying the formula A^{−1} = (1/det A) Ã^T (where Ã^T is the transpose of the cofactor matrix), we obtain
x_i = (1/det A) ∑_{j=1}^{n} ã_{ji} b_j = det A_i / det A,
the last equality holding because ∑_{j=1}^{n} b_j ã_{ji} is precisely the cofactor expansion of det A_i along its i-th column. □
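Cramer's rule yields a short exact solver when paired with rational arithmetic. A sketch using Python's `fractions` module (the solver name `cramer_solve` and the sample system are ours):

```python
from fractions import Fraction

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    # Cofactor expansion along the first row (Theorem 9).
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def cramer_solve(A, b):
    # Theorem 13: x_i = det(A_i) / det(A), where A_i is A with
    # column i replaced by the right-hand side b.
    n = len(A)
    d = det(A)
    xs = []
    for i in range(n):
        A_i = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
        xs.append(Fraction(det(A_i), d))
    return xs

A = [[2, 1], [1, 3]]
b = [5, 10]
x = cramer_solve(A, b)   # x = [1, 3]
# Verify A x = b
assert [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)] == b
```

Like cofactor expansion itself, this costs far more than Gaussian elimination for large n; its value is the closed-form expression, not efficiency.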
Theorem 14 (The adjugate formula for the inverse).
For an invertible matrix A,
A^{−1} = (1/det A) Ã^T,
where Ã^T denotes the transpose of the cofactor matrix (the adjugate of A).
Proof.
Write Ã^T = (ã_{ji}). The (i, k) entry of A Ã^T is
(A Ã^T)_{ik} = ∑_{j=1}^{n} a_{ij} ã_{kj}.
When i = k, this is the cofactor expansion of det A along the i-th row, so it equals det A. When i ≠ k, it is the determinant of the matrix obtained by replacing the k-th row of A with a copy of the i-th row; since two rows are equal, this determinant vanishes. Therefore A Ã^T = (det A) I. Since det A ≠ 0, dividing both sides by det A gives A ⋅ (1/det A) Ã^T = I, and one checks that (1/det A) Ã^T ⋅ A = I in the same way. □
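Theorem 14 likewise gives a direct, if inefficient, inversion routine. The sketch below (function names ours, 0-based indices) builds Ã^T entrywise and confirms A ⋅ A^{−1} = I with exact rationals:

```python
from fractions import Fraction

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    # Cofactor expansion along the first row (Theorem 9).
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def adjugate_inverse(A):
    # Theorem 14: (A^{-1})_{ij} = cofactor_{ji} / det A,
    # i.e. the transposed cofactor matrix scaled by 1/det A.
    n = len(A)
    d = det(A)
    return [[Fraction((-1) ** (i + j) * det(minor(A, j, i)), d)
             for j in range(n)] for i in range(n)]

A = [[2, 1, 3], [0, -1, 2], [1, 0, 1]]
Ainv = adjugate_inverse(A)

# Check A * A^{-1} = I exactly
n = len(A)
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```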