
Linear Maps: Structure-Preserving Maps Between Vector Spaces

The rank–nullity theorem states that dim(ker f) + dim(im f) = dim(V) for any linear map f : V → W. We prove this central result after developing kernels, images, and injectivity/surjectivity criteria, then use it to classify finite-dimensional vector spaces up to isomorphism.

Folio Official
March 1, 2026

1. Definition and Examples of Linear Maps

Definition 1 (Linear map).
Let V and W be vector spaces over K. A map T:V→W is called a linear map (or linear transformation) if it satisfies the following two conditions:
  1. T(u+v)=T(u)+T(v) (preservation of addition).

  2. T(av)=aT(v) (preservation of scalar multiplication).

Remark 2.
The two conditions can be consolidated into one: T(au+bv)=aT(u)+bT(v) for all a,b∈K and u,v∈V.
Example 3.
  • The rotation map Rθ:R2→R2, rotation of the plane through a fixed angle θ, defines a linear map.

  • The differentiation operator D:K[x]≤n→K[x]≤n−1, defined by Df=f′, is linear.

  • The definite integral I:C[a,b]→R, given by I(f)=∫ab f(x)dx, is linear.

  • The map T:R2→R2 defined by T(v)=v+(1,0) is not linear, since T(0)=(1,0)≠0.
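The two conditions of Definition 1 can be checked numerically. Below is a minimal sketch using NumPy; the angle θ = π/4 and the test vectors are arbitrary choices, not taken from the text.

```python
import numpy as np

# Rotation by theta = pi/4 as a 2x2 matrix; matrix-vector products
# are linear, so the rotation should pass both checks below.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def rotate(v):
    return R @ v

def translate(v):
    # The shifted map T(v) = v + (1, 0) from the last bullet.
    return v + np.array([1.0, 0.0])

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])
a = 2.5

# Preservation of addition and of scalar multiplication:
print(np.allclose(rotate(u + v), rotate(u) + rotate(v)))  # True
print(np.allclose(rotate(a * v), a * rotate(v)))          # True

# The translation sends 0 to (1, 0), so it cannot be linear:
print(translate(np.zeros(2)))  # [1. 0.]
```

Such spot checks cannot prove linearity, but a single failing case (like the translation at the zero vector) does disprove it.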

Theorem 4.
Every linear map T:V→W sends the zero vector to the zero vector: T(0V​)=0W​.
Proof.
T(0V)=T(0⋅0V)=0⋅T(0V)=0W, where 0⋅0V denotes the scalar zero acting on the zero vector. □
Theorem 5 (A linear map is determined by its action on a basis).
Let {v1​,…,vn​} be a basis for V. For any choice of vectors w1​,…,wn​∈W, there exists a unique linear map T:V→W satisfying T(vi​)=wi​ for i=1,…,n.
Proof.
For v=∑ai​vi​, define T(v)=∑ai​wi​. The unique representation of each v with respect to the basis ensures that T is well-defined, and linearity is verified by direct computation. Uniqueness holds because the images of the basis vectors completely determine T. □
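The construction in this proof is concrete: with the standard basis of R3, the unique linear map sending e_i to w_i is the matrix whose columns are the w_i. A small sketch (the vectors w_i here are arbitrary illustrative choices):

```python
import numpy as np

# Prescribed images of the standard basis e1, e2, e3 of R^3
# (illustrative choices, not from the text).
w = [np.array([1.0, 0.0]),
     np.array([2.0, 1.0]),
     np.array([0.0, -1.0])]

# The unique linear map with T(e_i) = w_i is the matrix whose
# columns are the w_i.
A = np.column_stack(w)

def T(v):
    # T(v) = sum_i a_i w_i, where a_i are the coordinates of v.
    return A @ v

v = np.array([3.0, -1.0, 2.0])
# Direct computation of sum a_i w_i agrees with the matrix product:
expected = 3.0 * w[0] + (-1.0) * w[1] + 2.0 * w[2]
print(np.allclose(T(v), expected))  # True
```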

2. Kernel and Image

Definition 6 (Kernel).
The kernel (or null space) of a linear map T:V→W is the set
kerT={v∈V∣T(v)=0}.
Definition 7 (Image).
The image (or range) of a linear map T:V→W is the set
ImT={T(v)∣v∈V}={w∈W∣∃v∈V,T(v)=w}.
Theorem 8.
kerT is a subspace of V, and ImT is a subspace of W.
Proof.
For the kernel: since T(0)=0, we have 0∈kerT, so the kernel is nonempty. If u,v∈kerT, then T(u+v)=T(u)+T(v)=0, so u+v∈kerT. Also, T(av)=aT(v)=a⋅0=0, so av∈kerT.

For the image: 0=T(0)∈ImT. If T(u),T(v)∈ImT, then T(u)+T(v)=T(u+v)∈ImT. Also, aT(v)=T(av)∈ImT. □
Theorem 9 (Injectivity and the kernel).
A linear map T is injective if and only if kerT={0}.
Proof.
(⇒) If v∈kerT, then T(v)=0=T(0), and injectivity gives v=0.

(⇐) If T(u)=T(v), then T(u−v)=0, so u−v∈kerT={0}, whence u=v. □
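Theorem 9 gives a practical injectivity test: compute the kernel and check whether it is {0}. Numerically this can be sketched via singular values (a map represented by a matrix is injective exactly when the matrix has full column rank); the matrices below are arbitrary illustrations, not from the text.

```python
import numpy as np

def kernel_is_trivial(A, tol=1e-10):
    # ker T = {0} iff the matrix of T has full column rank,
    # i.e. iff all of its singular values are (numerically) nonzero.
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol)) == A.shape[1]

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second column = 2 * first: nontrivial kernel
B = np.array([[1.0, 0.0],
              [0.0, 3.0]])  # invertible, so trivial kernel

print(kernel_is_trivial(A))  # False: not injective
print(kernel_is_trivial(B))  # True: injective
```

The tolerance `tol` is needed because floating-point singular values of a rank-deficient matrix are tiny rather than exactly zero.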

3. The Rank–Nullity Theorem

Definition 10.
The dimension dimkerT is called the nullity of T, and dimImT is called the rank of T.
Theorem 11 (Rank–nullity theorem).
Let V be a finite-dimensional vector space and let T:V→W be a linear map. Then
dimV=dimkerT+dimImT.
Proof.
Let dimkerT=r and choose a basis {u1​,…,ur​} for kerT. Extend this to a basis {u1​,…,ur​,v1​,…,vs​} for V, where r+s=dimV.

We claim that {T(v1​),…,T(vs​)} is a basis for ImT.

Spanning. Let w∈ImT. Then w=T(v) for some v=∑ai​ui​+∑bj​vj​. Since each ui​∈kerT, we have T(v)=∑bj​T(vj​).

Linear independence. Suppose ∑bj​T(vj​)=0. Then T(∑bj​vj​)=0, so ∑bj​vj​∈kerT. This means ∑bj​vj​=∑ai​ui​ for some scalars ai​, i.e. ∑ai​ui​−∑bj​vj​=0. Since {u1​,…,ur​,v1​,…,vs​} is a basis, all coefficients are zero. In particular, bj​=0 for each j.

Therefore dimImT=s=dimV−dimkerT. □
Example 12.
Let T:R3→R2 be defined by T(x,y,z)=(x+y,y+z). One checks that kerT={(t,−t,t)∣t∈R}, so dimkerT=1. By the rank–nullity theorem, dimImT=3−1=2=dimR2. Hence T is surjective.
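The computation in Example 12 can be verified with NumPy, using the matrix of T in the standard bases:

```python
import numpy as np

# Matrix of T(x, y, z) = (x + y, y + z) in the standard bases.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)   # dim Im T
nullity = A.shape[1] - rank       # dim ker T, by rank-nullity

print(rank, nullity)  # 2 1

# The kernel direction (t, -t, t) from the example is indeed sent to 0:
print(A @ np.array([1.0, -1.0, 1.0]))  # [0. 0.]
```

Since the rank equals dim R2 = 2, the image is all of R2, confirming that T is surjective.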

4. The Space of Linear Maps

Definition 13.
Let V and W be vector spaces over K. The set of all linear maps from V to W is denoted HomK​(V,W). Equipped with pointwise addition (T+S)(v)=T(v)+S(v) and scalar multiplication (aT)(v)=aT(v), it is itself a vector space over K.
Theorem 14.
If dimV=n and dimW=m, then dimHomK​(V,W)=mn.
Proof.
By the theorem on determination by basis images, every T∈Hom(V,W) is determined by the images of a basis of V— that is, by n vectors in W, each having m coordinates. This correspondence is an isomorphism HomK​(V,W)≅Mm×n​(K), and dimMm×n​(K)=mn. □
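The isomorphism with Mm×n(K) can be made explicit: the matrix units E_ij (1 in position (i,j), 0 elsewhere) form a basis of Mm×n(K), so there are exactly mn of them. A small sketch for m = 2, n = 3 (the concrete matrix A is an arbitrary illustration):

```python
import numpy as np

m, n = 2, 3  # dim W = 2, dim V = 3

# The matrix units E_ij: exactly m * n of them, one per entry position.
units = [np.eye(1, m * n, k).reshape(m, n) for k in range(m * n)]
print(len(units))  # 6, matching dim Hom(V, W) = mn

# Any 2x3 matrix is a unique linear combination of the E_ij,
# with coefficients given by its entries:
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
recombined = sum(A[k // n, k % n] * units[k] for k in range(m * n))
print(np.allclose(A, recombined))  # True
```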
Definition 15 (Isomorphism).
A linear map T:V→W that is bijective is called an isomorphism. When an isomorphism exists, we write V≅W.
Theorem 16.
For finite-dimensional vector spaces V and W, V≅W if and only if dimV=dimW.
Proof.
(⇒) An isomorphism T:V→W carries a basis of V to a basis of W: the images are linearly independent because kerT={0}, and they span W because T is surjective. Hence dimV=dimW.

(⇐) If dimV=dimW=n, choose bases {v1,…,vn} of V and {w1,…,wn} of W, and let T be the unique linear map with T(vi)=wi (Theorem 5). Since T maps a basis onto a basis, it is bijective, so V≅W. □
