
Jordan Normal Form: Beyond Diagonalization

Every complex square matrix is similar to a Jordan normal form — a block-diagonal matrix of Jordan blocks that is unique up to block ordering. We prove existence and uniqueness, connect the Jordan form to the minimal polynomial, and apply it to compute the matrix exponential e^{tA}.

Folio Official
March 1, 2026

1. Motivation: The Limits of Diagonalization

Not every matrix is diagonalizable. For instance,
$$A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$$
has eigenvalue 2 with algebraic multiplicity 2 but geometric multiplicity 1, so it cannot be diagonalized.

Nevertheless, even for such matrices there exists a simplest possible form to which they can be reduced. That form is the Jordan normal form.
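This failure of diagonalizability is easy to check computationally; a quick sketch, assuming sympy is available:

```python
import sympy as sp

A = sp.Matrix([[2, 1], [0, 2]])

# Eigenvalue 2 appears twice in the characteristic polynomial,
# but the eigenspace ker(A - 2I) is only one-dimensional.
geometric_multiplicity = len((A - 2 * sp.eye(2)).nullspace())
print(geometric_multiplicity)      # 1
print(A.is_diagonalizable())       # False
```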

2. Jordan Blocks

Definition 1 (Jordan block).
For λ ∈ K and a positive integer k, the k×k matrix
$$J_k(\lambda) = \begin{pmatrix} \lambda & 1 & & O \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ O & & & \lambda \end{pmatrix} \in M_k(K)$$
is called a Jordan block of size k with eigenvalue λ. Its diagonal entries are all λ, the superdiagonal entries are all 1, and every other entry is 0.
Example 2.
$$J_1(\lambda) = (\lambda), \quad J_2(\lambda) = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}, \quad J_3(\lambda) = \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix}.$$
Remark 3.
A 1×1 Jordan block J_1(λ) = (λ) is just a diagonal entry. A diagonal matrix is therefore the special case in which every Jordan block has size 1.
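The structure in Definition 1 is straightforward to build programmatically; a small helper, assuming numpy is available (the name `jordan_block` is ours):

```python
import numpy as np

def jordan_block(lam, k):
    """k-by-k Jordan block: lam on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

# J_3(5): diagonal 5s, superdiagonal 1s, zeros elsewhere.
print(jordan_block(5.0, 3))
```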

3. Generalized Eigenspaces

Definition 4 (Generalized eigenspace).
Let λ be an eigenvalue of A with algebraic multiplicity m. The subspace
$$W_\lambda = \ker (A - \lambda I)^m$$
is called the generalized eigenspace of λ.
Theorem 5.
The generalized eigenspace satisfies dim W_λ = m (equal to the algebraic multiplicity).
Theorem 6.
If the distinct eigenvalues of A are λ_1, …, λ_s with algebraic multiplicities m_1, …, m_s, then the whole space decomposes as a direct sum of generalized eigenspaces:
$$K^n = W_{\lambda_1} \oplus W_{\lambda_2} \oplus \cdots \oplus W_{\lambda_s}.$$
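Theorem 5 can be checked on a small example; a sketch assuming sympy, with a hypothetical 3×3 matrix chosen for illustration:

```python
import sympy as sp

# Hypothetical example: eigenvalue 2 with algebraic multiplicity 2
# but geometric multiplicity 1, plus a simple eigenvalue 3.
A = sp.Matrix([
    [2, 1, 0],
    [0, 2, 0],
    [0, 0, 3],
])

# The ordinary eigenspace ker(A - 2I) is only 1-dimensional...
geom = len((A - 2 * sp.eye(3)).nullspace())

# ...but the generalized eigenspace ker (A - 2I)^2 has dimension 2,
# matching the algebraic multiplicity, as Theorem 5 states.
gen = len(((A - 2 * sp.eye(3)) ** 2).nullspace())

print(geom, gen)   # 1 2
```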

4. Existence and Uniqueness of the Jordan Normal Form

Theorem 7 (Jordan normal form).
Let K be an algebraically closed field (in particular, K=C). For any n×n matrix A, there exists an invertible matrix P such that
$$P^{-1}AP = \begin{pmatrix} J_{k_1}(\lambda_1) & & & O \\ & J_{k_2}(\lambda_2) & & \\ & & \ddots & \\ O & & & J_{k_r}(\lambda_r) \end{pmatrix}.$$
The right-hand side is called the Jordan normal form of A. Up to the ordering of the Jordan blocks, it is uniquely determined by A.
Example 8.
Suppose a 4×4 matrix A has characteristic polynomial (λ−1)^2(λ−3)^2, giving eigenvalues λ = 1 (algebraic multiplicity 2) and λ = 3 (algebraic multiplicity 2).

If dim V_1 = 1 and dim V_3 = 1, the Jordan normal form is
$$J = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 3 & 1 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$

If instead dim V_1 = 2 and dim V_3 = 1, the Jordan normal form becomes
$$J = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 3 & 1 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$
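Jordan forms like those in Example 8 can be computed with sympy's `jordan_form`; a sketch with a hypothetical upper-triangular matrix realizing the first case (dim V_1 = dim V_3 = 1):

```python
import sympy as sp

# Characteristic polynomial (lam - 1)^2 (lam - 3)^2, with
# one-dimensional eigenspaces for both eigenvalues.
A = sp.Matrix([
    [1, 1, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 3, 1],
    [0, 0, 0, 3],
])

# jordan_form returns P and J with A = P J P^{-1}.
P, J = A.jordan_form()
print(J)   # J_2(1) and J_2(3) on the block diagonal
```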

5. The Minimal Polynomial

Definition 9 (Minimal polynomial).
The minimal polynomial m_A(λ) of a matrix A is the monic polynomial of least degree satisfying m_A(A) = O.
Theorem 10.
  1. The minimal polynomial divides the characteristic polynomial.

  2. The minimal polynomial and the characteristic polynomial share the same roots (eigenvalues).

  3. A is diagonalizable if and only if m_A(λ) has no repeated roots.

Theorem 11.
The minimal polynomial is determined by the Jordan normal form as follows. If the largest Jordan block corresponding to each eigenvalue λ_i has size k_i, then
$$m_A(\lambda) = \prod_{i=1}^{s} (\lambda - \lambda_i)^{k_i}.$$
Example 12.
Consider the Jordan matrix
$$J = \begin{pmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$
The largest Jordan block for λ = 2 is 2×2, and for λ = 3 it is 1×1. Hence m_A(λ) = (λ−2)^2(λ−3), while the characteristic polynomial is p_A(λ) = (λ−2)^3(λ−3).
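Theorem 11's prediction for Example 12 can be verified by direct substitution (a sketch, assuming sympy):

```python
import sympy as sp

# The Jordan matrix of Example 12: blocks J_2(2), J_1(2), J_1(3).
J = sp.Matrix([
    [2, 1, 0, 0],
    [0, 2, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 3],
])
I = sp.eye(4)

# (lam - 2)^2 (lam - 3) annihilates J...
annihilates = ((J - 2 * I) ** 2 * (J - 3 * I) == sp.zeros(4))
# ...while the lower-degree candidate (lam - 2)(lam - 3) does not,
# so the minimal polynomial is indeed (lam - 2)^2 (lam - 3).
lower_fails = ((J - 2 * I) * (J - 3 * I) != sp.zeros(4))

print(annihilates, lower_fails)   # True True
```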

6. The Matrix Exponential

Definition 13 (Matrix exponential).
For an n×n matrix A, the matrix exponential is defined by the convergent power series
$$e^A = \sum_{k=0}^{\infty} \frac{A^k}{k!} = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots$$
Theorem 14.
The matrix exponential of a Jordan block is given by
$$e^{t J_k(\lambda)} = e^{\lambda t} \begin{pmatrix} 1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{k-1}}{(k-1)!} \\ 0 & 1 & t & \cdots & \frac{t^{k-2}}{(k-2)!} \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & & 1 & t \\ 0 & \cdots & & 0 & 1 \end{pmatrix}.$$
Proof.
Decompose the Jordan block as J_k(λ) = λI + N, where N is the nilpotent matrix with ones on the superdiagonal and zeros elsewhere. Since N^k = O, the matrix N is nilpotent of index k. Because λI and N commute, we have e^{tJ_k(λ)} = e^{λtI} e^{tN} = e^{λt} e^{tN}. The series for e^{tN} terminates after k terms because N^k = O, producing the stated upper triangular matrix. □
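The proof's decomposition can be checked numerically against a general-purpose matrix exponential; a sketch assuming numpy and scipy are available, with arbitrary sample values of t, λ, and k:

```python
import math
import numpy as np
from scipy.linalg import expm

t, lam, k = 0.7, 1.5, 3

# J_k(lam) = lam*I + N, with N the nilpotent superdiagonal matrix.
N = np.diag(np.ones(k - 1), 1)
J = lam * np.eye(k) + N

# Closed form from the proof: e^{lam t} * sum_{j=0}^{k-1} (tN)^j / j!
# (the series terminates because N^k = O).
closed = np.exp(lam * t) * sum(
    np.linalg.matrix_power(t * N, j) / math.factorial(j) for j in range(k)
)

print(np.allclose(expm(t * J), closed))   # True
```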
Example 15.
The system of linear ordinary differential equations x′(t) = Ax(t) has the solution x(t) = e^{tA} x(0). When A = PJP^{−1}, we compute e^{tA} = P e^{tJ} P^{−1}.

For
$$A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} = J_2(2),$$
we obtain
$$e^{tA} = e^{2t} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.$$
With initial condition x(0) = (1, 1)^T, the solution is
$$x(t) = e^{2t} \begin{pmatrix} 1 + t \\ 1 \end{pmatrix}.$$
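The computation in Example 15 can be reproduced symbolically; a sketch assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')

# A = J_2(2), the motivating non-diagonalizable matrix.
A = sp.Matrix([[2, 1], [0, 2]])

# sympy evaluates the matrix exponential e^{tA} in closed form.
E = sp.simplify((t * A).exp())   # e^{2t} * [[1, t], [0, 1]]

# Solution of x'(t) = A x(t) with x(0) = (1, 1)^T.
x0 = sp.Matrix([1, 1])
x = sp.simplify(E * x0)          # e^{2t} * (1 + t, 1)^T
```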
Remark 16.
The Jordan normal form is a natural generalization of diagonalization. Where a diagonal matrix represents independent scalings along each coordinate axis, a Jordan block captures scaling together with an infinitesimal shearing. This shearing manifests in the matrix exponential e^{tJ_k(λ)} as polynomial factors in t multiplying the exponential e^{λt} — the hallmark of the interaction between exponential growth and algebraic (polynomial) growth that governs the behavior of non-diagonalizable linear systems.
Tags: Linear Algebra · Algebra · Textbook · Jordan Normal Form · Minimal Polynomial · Matrix Exponential
Linear Algebra Textbook · Part 11 of 13