Linear Algebra: A Complete Summary of Definitions, Theorems, and Proofs
A single-page survey of undergraduate linear algebra. From the axioms of a vector space to Jordan normal form, this article systematically organizes the definitions, theorems, and proofs that form the core of a first course. Includes a topic dependency diagram.
Vector Spaces: Definitions and First Properties
A vector space has exactly one zero vector and every element has a unique additive inverse — these are consequences of the eight axioms, not assumptions. We prove these uniqueness results, establish the subspace criterion for verifying subspaces efficiently, and characterize when a sum of subspaces is a direct sum.
Linear Independence, Bases, and Dimension
Every basis of a finite-dimensional vector space has the same number of elements — this fact, proved via the Steinitz exchange lemma, makes dimension well-defined. We build to it through linear combinations, span, and the precise criterion for linear independence.
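A quick numerical companion to the independence criterion — a minimal sketch using NumPy, with illustrative vectors not taken from the text: vectors are linearly independent exactly when the matrix having them as columns has full column rank.

```python
import numpy as np

# Three illustrative vectors in R^3, placed as columns of a matrix.
vectors = np.column_stack([[1., 0., 1.],
                           [0., 1., 1.],
                           [1., 1., 0.]])

# Full column rank (= 3) means the only solution of c1*v1 + c2*v2 + c3*v3 = 0
# is c = 0, i.e. the vectors are linearly independent and form a basis of R^3.
assert np.linalg.matrix_rank(vectors) == 3
```

Since any two bases of the same space have equal size, this basis having 3 elements is consistent with dim R^3 = 3 being well-defined.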
Linear Maps: Structure-Preserving Maps Between Vector Spaces
The rank–nullity theorem states that dim(ker f) + dim(im f) = dim(V) for any linear map f : V → W. We prove this central result after developing kernels, images, and injectivity/surjectivity criteria, then use it to classify finite-dimensional vector spaces up to isomorphism.
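The theorem can be checked on any concrete map. Below is a small NumPy sketch (the matrix and kernel vectors are illustrative choices, not from the text) for a map f : R^4 → R^3 represented by a rank-2 matrix.

```python
import numpy as np

# Illustrative matrix of a linear map f : R^4 -> R^3, so dim V = 4.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # third row = first + second

rank = np.linalg.matrix_rank(A)    # dim(im f) = 2
assert rank == 2

# Two independent vectors in ker f (found by back-substitution):
k1 = np.array([ 2., -1., 1., 0.])
k2 = np.array([-1.,  0., 0., 1.])
assert np.allclose(A @ k1, 0) and np.allclose(A @ k2, 0)

# Rank-nullity: dim(ker f) + dim(im f) = 2 + 2 = 4 = dim(V).
assert rank + 2 == A.shape[1]
```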
Matrices and Representation of Linear Maps
Every linear map between finite-dimensional spaces is uniquely represented by a matrix once bases are chosen, and composition of maps corresponds to matrix multiplication. We derive the change-of-basis formula, characterize invertible matrices, and show that matrix rank equals the rank of the corresponding linear map.
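The correspondence between composition and matrix multiplication can be seen numerically; a minimal sketch with illustrative matrices (standard bases assumed throughout):

```python
import numpy as np

# A represents f : R^3 -> R^2, B represents g : R^2 -> R^2,
# both with respect to the standard bases.
A = np.array([[1., 0., 2.],
              [0., 1., 1.]])
B = np.array([[2., 1.],
              [0., 3.]])

x = np.array([1., 2., 3.])
# g(f(x)) computed two ways: apply the maps in turn, or apply the single
# matrix B A that represents the composition g after f.
assert np.allclose(B @ (A @ x), (B @ A) @ x)
```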
Systems of Linear Equations and Row Reduction
The general solution of a linear system Ax = b is a particular solution plus the null space of A. We prove this structure theorem by developing Gaussian elimination and reduced row echelon form, and give precise conditions for when solutions exist and when they are unique.
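The "particular solution plus null space" structure is easy to observe numerically. A minimal sketch, with an illustrative underdetermined system of 2 equations in 3 unknowns:

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [0., 1., 1.]])
b = np.array([3., 1.])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # a particular solution
assert np.allclose(A @ x_p, b)

n = np.array([1., -1., 1.])                  # a null-space vector: A n = 0
assert np.allclose(A @ n, 0)

# The full solution set is x_p + span{n}: shifting by any multiple of n
# still solves the system.
for t in (-2.0, 0.5, 7.0):
    assert np.allclose(A @ (x_p + t * n), b)
```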
The Determinant: A Scalar Invariant of Square Matrices
The determinant of a square matrix is the unique scalar-valued function of its columns that is multilinear, alternating, and normalized to 1 on the identity matrix. We construct it via the Leibniz formula, prove the product formula det(AB) = det(A)det(B), and derive Cramer's rule for solving linear systems.
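Both the product formula and Cramer's rule can be verified on a small example. A NumPy sketch with illustrative 2×2 matrices:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
B = np.array([[0., 1.],
              [4., 2.]])

# Product formula: det(AB) = det(A) det(B).
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Cramer's rule for Ax = b: x_i = det(A_i) / det(A), where A_i is A with
# column i replaced by b (valid here since det(A) = 5 is nonzero).
b = np.array([5., 10.])
x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b
    x[i] = np.linalg.det(Ai) / np.linalg.det(A)
assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.solve(A, b))
```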
Eigenvalues and Eigenvectors: Invariant Directions of Linear Maps
Eigenvalues are the roots of the characteristic polynomial det(A - tI), and eigenvectors corresponding to distinct eigenvalues are always linearly independent. We prove these facts, distinguish algebraic from geometric multiplicity, and establish the Cayley–Hamilton theorem: every matrix satisfies its own characteristic equation.
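The Cayley–Hamilton theorem in particular invites a direct check. A minimal NumPy sketch with an illustrative 2×2 matrix (np.poly returns the coefficients of the characteristic polynomial, highest degree first):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])

# Characteristic polynomial of A: t^2 - (trace)t + det = t^2 - 5t + 5.
c = np.poly(A)
assert np.allclose(c, [1., -5., 5.])

# Substituting A for t gives the zero matrix: A^2 - 5A + 5I = 0.
p_of_A = c[0] * (A @ A) + c[1] * A + c[2] * np.eye(2)
assert np.allclose(p_of_A, 0)
```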
Diagonalization: Simplifying Matrices by Choice of Basis
A matrix is diagonalizable if and only if the sum of its geometric multiplicities equals the matrix size. We prove this criterion, give a step-by-step diagonalization procedure with applications to computing A^n, and prove Schur's theorem that every complex square matrix is unitarily triangularizable.
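The A^n application is a one-liner once A = P D P^{-1} is in hand, since then A^n = P D^n P^{-1} and D^n is computed entrywise. A NumPy sketch with an illustrative diagonalizable matrix:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])          # eigenvalues 5 and 2, hence diagonalizable
evals, P = np.linalg.eig(A)       # columns of P are eigenvectors

n = 5
# A^n = P D^n P^{-1}, where D^n just raises each eigenvalue to the n-th power.
An = P @ np.diag(evals**n) @ np.linalg.inv(P)
assert np.allclose(An, np.linalg.matrix_power(A, n))
```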
Inner Product Spaces and Orthonormal Bases
The Cauchy–Schwarz inequality |⟨u, v⟩| ≤ ‖u‖ ‖v‖ is the cornerstone of inner product spaces. We prove it from the axioms, then develop the Gram–Schmidt process for constructing orthonormal bases, orthogonal complements, and orthogonal projections that give the closest point in a subspace.
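The Gram–Schmidt process translates directly into code: each vector has its projections onto the earlier orthonormal vectors subtracted off, then is normalized. A minimal NumPy sketch (classical, not the numerically preferred modified variant; input vectors are illustrative and assumed independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize a list of independent vectors."""
    basis = []
    for v in vectors:
        # Subtract the orthogonal projection onto each earlier basis vector.
        w = v - sum(np.dot(v, q) * q for q in basis)
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

Q = gram_schmidt([np.array([1., 1., 0.]),
                  np.array([1., 0., 1.]),
                  np.array([0., 1., 1.])])

# Rows of Q are orthonormal: Q Q^T = I.
assert np.allclose(Q @ Q.T, np.eye(3))
```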
Jordan Normal Form: Beyond Diagonalization
Every complex square matrix is similar to a Jordan normal form — a block-diagonal matrix of Jordan blocks that is unique up to block ordering. We prove existence and uniqueness, connect the Jordan form to the minimal polynomial, and apply it to compute the matrix exponential e^{tA}.
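A small symbolic sketch, using SymPy (an assumption; the text itself works by hand), of a matrix that is not diagonalizable but has a Jordan form, which then makes the exponential tractable:

```python
from sympy import Matrix, symbols

# Characteristic polynomial (t - 3)^2, but only one independent eigenvector:
# not diagonalizable, yet similar to a single 2x2 Jordan block.
A = Matrix([[ 5, 4],
            [-1, 1]])
P, J = A.jordan_form()          # A = P J P^{-1}
assert J == Matrix([[3, 1],
                    [0, 3]])
assert A == P * J * P.inv()

# The Jordan form reduces e^{tA} to the exponential of a Jordan block:
t = symbols('t')
expJt = (t * J).exp()           # equals e^{3t} * [[1, t], [0, 1]]
```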
The Spectral Theorem: Orthogonal Diagonalization of Symmetric Matrices
Every real symmetric matrix can be orthogonally diagonalized — we prove this spectral theorem and its complex generalization for normal operators. The spectral decomposition A = Σᵢ λᵢ Pᵢ into orthogonal projections is derived, and we apply it to classify quadratic forms via Sylvester's law of inertia.
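Both statements are easy to verify numerically for a concrete symmetric matrix. A NumPy sketch (the matrix is an illustrative choice; np.linalg.eigh is the symmetric eigensolver and returns an orthogonal eigenvector matrix):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])
evals, Q = np.linalg.eigh(A)        # eigenvalues 1 and 3; columns of Q orthonormal

assert np.allclose(Q.T @ Q, np.eye(2))            # Q is orthogonal
assert np.allclose(Q @ np.diag(evals) @ Q.T, A)   # A = Q D Q^T

# Spectral decomposition: A as a sum of eigenvalue-weighted rank-1
# orthogonal projections onto the eigenvector lines.
A_rebuilt = sum(l * np.outer(Q[:, i], Q[:, i]) for i, l in enumerate(evals))
assert np.allclose(A_rebuilt, A)
```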
The Singular Value Decomposition: Structure of Arbitrary Matrices
Every real m × n matrix factors as A = UΣV^T, where U and V are orthogonal and Σ is diagonal — this is the singular value decomposition. We prove existence, show how the SVD yields optimal low-rank approximations (Eckart–Young theorem), and construct the Moore–Penrose pseudoinverse for least-squares solutions.
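All three results can be exercised on a small example. A NumPy sketch with an illustrative 2 × 3 matrix of full row rank:

```python
import numpy as np

A = np.array([[3., 0., 1.],
              [0., 2., 0.]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vt, A)        # A = U Sigma V^T

# Eckart-Young: keeping only the largest singular value gives the best
# rank-1 approximation of A (in Frobenius and spectral norm).
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])
assert np.linalg.matrix_rank(A1) == 1

# Moore-Penrose pseudoinverse: least-squares solution of Ax = b.
b = np.array([1., 4.])
x = np.linalg.pinv(A) @ b
assert np.allclose(A @ x, b)    # exact here, since A has full row rank
```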