Folio by Interconnected
Series

Linear Algebra Textbook

From the axioms of vector spaces to Jordan normal form. A textbook series building definitions, theorems, and proofs in a systematic progression.

Folio Official
13 articles
01
Folio Official·March 1, 2026

Linear Algebra: A Complete Summary of Definitions, Theorems, and Proofs

A single-page survey of undergraduate linear algebra. From the axioms of a vector space to Jordan normal form, this article systematically organizes the definitions, theorems, and proofs that form the core of a first course. Includes a topic dependency diagram.

Linear Algebra · Algebra · Summary
02
Folio Official·March 1, 2026

Vector Spaces: Definitions and First Properties

A vector space has exactly one zero vector and every element has a unique additive inverse — these are consequences of the eight axioms, not assumptions. We prove these uniqueness results, establish the subspace criterion for verifying subspaces efficiently, and characterize when a sum of subspaces is a direct sum.

Linear Algebra · Algebra · Textbook
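As a sample of the style of argument the article develops, the uniqueness of the zero vector takes a single line from the axioms (here 0 and 0′ denote two candidate additive identities):

```latex
0' \;=\; 0' + 0 \;=\; 0 + 0' \;=\; 0,
```

where the first equality holds because 0 is an additive identity, the second by commutativity of addition, and the third because 0′ is an additive identity.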
03
Folio Official·March 1, 2026

Linear Independence, Bases, and Dimension

Every basis of a finite-dimensional vector space has the same number of elements — this fact, proved via the Steinitz exchange lemma, makes dimension well-defined. We build to it through linear combinations, span, and the precise criterion for linear independence.

Linear Algebra · Algebra · Textbook
04
Folio Official·March 1, 2026

Linear Maps: Structure-Preserving Maps Between Vector Spaces

The rank–nullity theorem states that dim(ker f) + dim(im f) = dim(V) for any linear map f : V → W. We prove this central result after developing kernels, images, and injectivity/surjectivity criteria, then use it to classify finite-dimensional vector spaces up to isomorphism.

Linear Algebra · Algebra · Textbook
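The rank–nullity theorem can be checked numerically. A minimal pure-Python sketch (the 3×4 matrix below is an illustrative choice, not taken from the article) computes the rank by row reduction over the rationals and verifies that rank plus nullity equals the dimension of the domain:

```python
from fractions import Fraction

def rank(A):
    """Row-reduce A (a list of rows) over the rationals and count pivots."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0  # current pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# f : R^4 -> R^3 given by a 3x4 matrix (third row = first row + second row).
A = [[1, 2, 0, 1],
     [0, 1, 1, 0],
     [1, 3, 1, 1]]
n = len(A[0])
dim_im = rank(A)        # rank = dim(im f)
dim_ker = n - dim_im    # nullity = dim(ker f)
assert dim_im + dim_ker == n   # rank-nullity
```

Since the third row is the sum of the first two, the rank here is 2 and the nullity is 2.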
05
Folio Official·March 1, 2026

Matrices and Representation of Linear Maps

Every linear map between finite-dimensional spaces is uniquely represented by a matrix once bases are chosen, and composition of maps corresponds to matrix multiplication. We derive the change-of-basis formula, characterize invertible matrices, and show that matrix rank equals the rank of the corresponding linear map.

Linear Algebra · Algebra · Textbook
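The correspondence between composition of maps and matrix multiplication can be seen directly in a few lines of Python; the two 2×2 matrices below are illustrative, not from the article:

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def apply(A, v):
    """Apply the linear map represented by A to the vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Two maps R^2 -> R^2 written in the standard basis.
F = [[1, 2],
     [0, 1]]
G = [[0, 1],
     [1, 0]]
v = [3, 4]

# Composing the maps agrees with multiplying their matrices:
assert apply(F, apply(G, v)) == apply(matmul(F, G), v)
```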
06
Folio Official·March 1, 2026

Systems of Linear Equations and Row Reduction

The general solution of a linear system Ax = b is a particular solution plus the null space of A. We prove this structure theorem by developing Gaussian elimination and reduced row echelon form, and give precise conditions for when solutions exist and when they are unique.

Linear Algebra · Algebra · Textbook
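The structure theorem "particular solution plus null space" is easy to watch in action. In this small sketch (the system is an illustrative choice), every shift of a particular solution by a null-space vector solves the same system:

```python
def apply(A, x):
    """Matrix-vector product with A as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 1, 0],
     [0, 1, 1]]
b = [3, 2]

x_p = [1, 2, 0]     # one particular solution of Ax = b
n   = [1, -1, 1]    # spans the null space of A (An = 0)

assert apply(A, x_p) == b
assert apply(A, n) == [0, 0]

# Every x_p + t*n is again a solution -- the structure theorem in action.
for t in range(-2, 3):
    x = [p + t * ni for p, ni in zip(x_p, n)]
    assert apply(A, x) == b
```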
07
Folio Official·March 1, 2026

The Determinant: A Scalar Invariant of Square Matrices

The determinant of a square matrix is the unique scalar-valued function characterized by alternation and multilinearity. We construct it via the Leibniz formula, prove the product formula det(AB) = det(A)det(B), and derive Cramer's rule for solving linear systems.

Linear Algebra · Algebra · Textbook
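The Leibniz formula is short enough to implement directly, and the product formula can then be spot-checked on small matrices. A minimal sketch (the two 3×3 matrices are illustrative):

```python
import math
from itertools import permutations

def sign(p):
    """Sign of a permutation: (-1) to the number of inversions."""
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

def det(A):
    """Leibniz formula: det A = sum over permutations p of sgn(p) * prod A[i][p[i]]."""
    n = len(A)
    return sum(sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]
B = [[1, 1, 0], [0, 2, 1], [1, 0, 1]]
assert det(A) == 7 and det(B) == 3
assert det(matmul(A, B)) == det(A) * det(B)   # product formula
```

The sum ranges over all n! permutations, so this is only practical for small n, but it is exactly the defining formula.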
08
Folio Official·March 1, 2026

Eigenvalues and Eigenvectors: Invariant Directions of Linear Maps

Eigenvalues are the roots of the characteristic polynomial det(A - tI), and eigenvectors from distinct eigenvalues are always linearly independent. We prove these facts, distinguish algebraic from geometric multiplicity, and establish the Cayley–Hamilton theorem: every matrix satisfies its own characteristic equation.

Linear Algebra · Algebra · Textbook
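For a 2×2 matrix the characteristic polynomial is t² − tr(A)t + det(A), so Cayley–Hamilton can be verified by direct computation; the matrix below is an illustrative choice:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1],
     [1, 3]]
tr = A[0][0] + A[1][1]                    # trace = 5
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant = 5
A2 = matmul(A, A)

# Cayley-Hamilton: A^2 - tr(A)*A + det(A)*I must be the zero matrix.
CH = [[A2[i][j] - tr * A[i][j] + (d if i == j else 0) for j in range(2)]
      for i in range(2)]
assert CH == [[0, 0], [0, 0]]
```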
09
Folio Official·March 1, 2026

Diagonalization: Simplifying Matrices by Choice of Basis

A matrix is diagonalizable if and only if the sum of its geometric multiplicities equals the matrix size. We prove this criterion, give a step-by-step diagonalization procedure with applications to computing A^n, and prove Schur's theorem that every complex square matrix is unitarily triangularizable.

Linear Algebra · Algebra · Textbook
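The A^n application can be sketched concretely. For the illustrative matrix below (eigenvalues 1 and 3, eigenvectors (1, −1) and (1, 1), worked out by hand), A^n = P Dⁿ P⁻¹ agrees with repeated multiplication:

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A has eigenvalues 1 and 3 with eigenvectors (1, -1) and (1, 1).
A = [[2, 1], [1, 2]]
P = [[1, 1], [-1, 1]]                       # eigenvectors as columns
P_inv = [[Fraction(1, 2), Fraction(-1, 2)],
         [Fraction(1, 2), Fraction(1, 2)]]

n = 5
Dn = [[1 ** n, 0], [0, 3 ** n]]       # D^n: just raise the eigenvalues
An = matmul(matmul(P, Dn), P_inv)     # A^n = P D^n P^{-1}

# Compare against naive repeated multiplication:
direct = A
for _ in range(n - 1):
    direct = matmul(direct, A)
assert An == direct
```

Raising D to a power costs two scalar exponentiations, which is the whole point of diagonalizing first.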
10
Folio Official·March 1, 2026

Inner Product Spaces and Orthonormal Bases

The Cauchy–Schwarz inequality |⟨u, v⟩| ≤ ‖u‖ ‖v‖ is the cornerstone of inner product spaces. We prove it from the axioms, then develop the Gram–Schmidt process for constructing orthonormal bases, orthogonal complements, and orthogonal projections that give the closest point in a subspace.

Linear Algebra · Algebra · Textbook
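The Gram–Schmidt process fits in a few lines. A minimal sketch for the standard dot product on Rⁿ (the three input vectors are an illustrative independent set):

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (standard dot product)."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            # Subtract the projection of w onto u (u is already a unit vector).
            c = sum(wi * ui for wi, ui in zip(w, u))
            w = [wi - c * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

Q = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])

# The result is orthonormal: <q_i, q_j> = 1 if i == j, else 0.
for i, qi in enumerate(Q):
    for j, qj in enumerate(Q):
        dot = sum(a * b for a, b in zip(qi, qj))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
```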
11
Folio Official·March 1, 2026

Jordan Normal Form: Beyond Diagonalization

Every complex square matrix is similar to a Jordan normal form — a block-diagonal matrix of Jordan blocks that is unique up to block ordering. We prove existence and uniqueness, connect the Jordan form to the minimal polynomial, and apply it to compute the matrix exponential e^{tA}.

Linear Algebra · Algebra · Textbook
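The matrix-exponential application has a clean closed form for a single 2×2 Jordan block J with eigenvalue λ: e^{tJ} = e^{λt} [[1, t], [0, 1]]. A small sketch (λ and t are illustrative values) checks this against the truncated power series:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(A, terms=30):
    """Truncated power series e^A = sum_k A^k / k! (fine for small matrices)."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]   # A^0 = I
    for k in range(1, terms):
        power = matmul(power, A)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

lam, t = 0.5, 2.0
tJ = [[lam * t, t], [0.0, lam * t]]   # t times the Jordan block [[lam, 1], [0, lam]]
E = expm(tJ)

# Closed form: e^{tJ} = e^{lam*t} * [[1, t], [0, 1]].
closed = [[math.exp(lam * t), t * math.exp(lam * t)],
          [0.0, math.exp(lam * t)]]
for i in range(2):
    for j in range(2):
        assert abs(E[i][j] - closed[i][j]) < 1e-9
```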
12
Folio Official·March 24, 2026

The Spectral Theorem: Orthogonal Diagonalization of Symmetric Matrices

Every real symmetric matrix can be orthogonally diagonalized — we prove this spectral theorem and its complex generalization for normal operators. The spectral decomposition A = Σᵢ λᵢ Pᵢ into orthogonal projections is derived, and we apply it to classify quadratic forms via Sylvester's law of inertia.

Linear Algebra · Algebra · Textbook
13
Folio Official·March 24, 2026

The Singular Value Decomposition: Structure of Arbitrary Matrices

Every real m × n matrix factors as A = UΣVᵀ, where U and V are orthogonal and Σ is diagonal — this is the singular value decomposition. We prove existence, show how the SVD yields optimal low-rank approximations (the Eckart–Young theorem), and construct the Moore–Penrose pseudoinverse for least-squares solutions.

Linear Algebra · Algebra · Textbook
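One concrete entry point: the singular values of A are the square roots of the eigenvalues of AᵀA. For a 2×2 matrix this reduces to the quadratic formula; the matrix below is an illustrative example:

```python
import math

A = [[3, 0],
     [4, 5]]

# A^T A for this 2x2 matrix, computed entry by entry.
AtA = [[A[0][0] ** 2 + A[1][0] ** 2,
        A[0][0] * A[0][1] + A[1][0] * A[1][1]],
       [A[0][0] * A[0][1] + A[1][0] * A[1][1],
        A[0][1] ** 2 + A[1][1] ** 2]]

# Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]] via the quadratic formula.
a, b, c = AtA[0][0], AtA[0][1], AtA[1][1]
mean = (a + c) / 2
disc = math.sqrt(((a - c) / 2) ** 2 + b * b)
sigma = [math.sqrt(mean + disc), math.sqrt(mean - disc)]  # singular values

# Sanity check: the product of the singular values equals |det A| = 15.
assert abs(sigma[0] * sigma[1] - 15.0) < 1e-9
```

Here AᵀA = [[25, 20], [20, 25]] with eigenvalues 45 and 5, so the singular values are 3√5 and √5.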