
Matrices and Representation of Linear Maps

Every linear map between finite-dimensional spaces is uniquely represented by a matrix once bases are chosen, and composition of maps corresponds to matrix multiplication. We derive the change-of-basis formula, characterize invertible matrices, and show that matrix rank equals the rank of the corresponding linear map.

Folio Official
March 1, 2026

1. Representation Matrices

Definition 1 (Representation matrix).
Let $V$ and $W$ be finite-dimensional vector spaces with bases $B = \{v_1, \dots, v_n\}$ and $C = \{w_1, \dots, w_m\}$, respectively. For a linear map $T : V \to W$, write
$$T(v_j) = \sum_{i=1}^{m} a_{ij} w_i \qquad (j = 1, \dots, n).$$
The $m \times n$ matrix $A = (a_{ij})$ is called the representation matrix (or matrix representation) of $T$ with respect to the bases $B$ and $C$, and is denoted $[T]_B^C$.
Remark 2.
The $j$-th column of the representation matrix is the coordinate vector of $T(v_j)$ with respect to $C$. In short, $[T]_B^C$ is the matrix whose columns are the images of the basis vectors, expressed in coordinates.
Example 3.
Let $T : \mathbb{R}^2 \to \mathbb{R}^3$ be defined by $T(x, y) = (x + y,\ 2x,\ -y)$. With respect to the standard bases $\{e_1, e_2\}$ and $\{f_1, f_2, f_3\}$:
$$T(e_1) = (1, 2, 0), \quad T(e_2) = (1, 0, -1) \implies [T] = \begin{pmatrix} 1 & 1 \\ 2 & 0 \\ 0 & -1 \end{pmatrix}.$$
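The column-by-column recipe of Remark 2 translates directly into code. The sketch below (not part of the article) rebuilds the matrix of Example 3 by applying $T$ to the standard basis vectors:

```python
# Building the representation matrix of T(x, y) = (x + y, 2x, -y)
# from Example 3, one basis vector at a time.

def T(x, y):
    """The linear map T : R^2 -> R^3 from Example 3."""
    return (x + y, 2 * x, -y)

# Each image T(e_j) of a standard basis vector is one column of [T].
columns = [T(1, 0), T(0, 1)]

# Transpose the list of columns into a list of rows: the 3x2 matrix [T].
A = [list(row) for row in zip(*columns)]
print(A)  # [[1, 1], [2, 0], [0, -1]]
```

The printed rows match the matrix displayed above.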
Theorem 4 (Composition corresponds to matrix multiplication).
Let $T : V \to W$ and $S : W \to U$ be linear maps, and let $A = [T]_B^C$ and $B' = [S]_C^D$ be their representation matrices with respect to bases $B, C, D$. Then the representation matrix of the composition $S \circ T : V \to U$ is
$$[S \circ T]_B^D = B'A.$$
In other words, composition of linear maps corresponds to multiplication of their matrices.
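Theorem 4 can be checked numerically. In the sketch below, $T$ is the map from Example 3 and $S(x, y, z) = (x + z,\ y)$ is an assumed example map; the helper `rep_matrix` is hypothetical, not from the article:

```python
# Checking Theorem 4: the matrix of S∘T equals the product B'A,
# with T from Example 3 and an assumed S : R^3 -> R^2.

def T(x, y):
    return (x + y, 2 * x, -y)

def S(x, y, z):
    return (x + z, y)

def rep_matrix(f, n):
    """Matrix whose columns are the images of the standard basis of K^n."""
    cols = [f(*[1 if i == j else 0 for i in range(n)]) for j in range(n)]
    return [list(row) for row in zip(*cols)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = rep_matrix(T, 2)                           # [T], a 3x2 matrix
Bp = rep_matrix(S, 3)                          # [S], a 2x3 matrix
ST = rep_matrix(lambda x, y: S(*T(x, y)), 2)   # [S∘T] computed directly
assert ST == matmul(Bp, A)                     # Theorem 4: [S∘T] = B'A
print(ST)  # [[1, 0], [2, 0]]
```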

2. Change of Basis

Definition 5 (Transition matrix).
Let $B = \{v_1, \dots, v_n\}$ and $B' = \{v_1', \dots, v_n'\}$ be two bases for $V$. The matrix $P = (p_{ij})$ defined by
$$v_j' = \sum_{i=1}^{n} p_{ij} v_i \qquad (j = 1, \dots, n)$$
is called the transition matrix (or change-of-basis matrix) from $B$ to $B'$.
Theorem 6 (Change-of-basis formula).
Let $T : V \to V$ be a linear map with representation matrix $A$ with respect to a basis $B$, and representation matrix $A'$ with respect to a basis $B'$. If $P$ is the transition matrix from $B$ to $B'$, then
$$A' = P^{-1}AP.$$
Proof.
Let $x$ be the coordinate vector of $v \in V$ with respect to $B$, and let $x'$ be its coordinate vector with respect to $B'$. Then $x = Px'$. The image of $v$ under $T$ has $B$-coordinates $Ax$, and the $B'$-coordinates of this image are $P^{-1}Ax = P^{-1}APx'$. Therefore $A' = P^{-1}AP$. □
Definition 7 (Similar matrices).
Two square matrices $A$ and $A'$ are said to be similar if there exists an invertible matrix $P$ such that $A' = P^{-1}AP$.
Remark 8.
Similar matrices represent the same linear map with respect to different bases. Consequently, similar matrices share the same eigenvalues, determinant, rank, and trace.
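A worked instance of Theorem 6 and Remark 8, with all matrices chosen for illustration: conjugating an example $A$ by a transition matrix $P$, then checking that trace and determinant survive the change of basis.

```python
# Change of basis A' = P^{-1} A P for assumed 2x2 example matrices,
# using exact rational arithmetic.

from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = M[0]; c, d = M[1]
    det = a * d - b * c
    return [[F(d, det), F(-b, det)], [F(-c, det), F(a, det)]]

A = [[2, 1], [0, 3]]   # matrix of T with respect to basis B
P = [[1, 1], [1, 2]]   # transition matrix from B to B'

Ap = matmul(inv2(P), matmul(A, P))   # Theorem 6: A' = P^{-1} A P
assert Ap == [[3, 2], [0, 2]]

# Remark 8: similar matrices share trace and determinant.
assert Ap[0][0] + Ap[1][1] == A[0][0] + A[1][1]          # trace = 5
assert (Ap[0][0] * Ap[1][1] - Ap[0][1] * Ap[1][0]
        == A[0][0] * A[1][1] - A[0][1] * A[1][0])        # det = 6
```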

3. The Rank of a Matrix

Definition 9 (Column rank and row rank).
The column rank of an $m \times n$ matrix $A$ over a field $K$ is the dimension of the subspace of $K^m$ spanned by the columns of $A$. The row rank is the dimension of the subspace of $K^n$ spanned by the rows of $A$.
Theorem 10 (Column rank equals row rank).
For any matrix, the column rank and the row rank are equal. This common value is denoted $\operatorname{rank} A$.
Proof.
View $A$ as the representation matrix of the linear map $T_A : K^n \to K^m$ defined by $T_A(x) = Ax$. The column rank is $\dim \operatorname{Im} T_A$. Elementary row operations preserve the row space, and they also preserve every linear dependence relation among the columns, so neither rank changes under row reduction. In the reduced row echelon form, both the row rank and the column rank equal the number of pivots, hence the two ranks coincide. □
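The pivot count in the proof can be computed directly. Below is a small Gaussian-elimination rank function (a sketch with an assumed example matrix, not the article's algorithm), used to check that a matrix and its transpose have the same rank:

```python
# Rank via forward elimination with exact rational arithmetic;
# row rank of A = column rank of A = rank of A^T.

from fractions import Fraction as F

def rank(M):
    """Number of pivots after forward elimination."""
    M = [[F(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]   # move pivot row into place
        for i in range(r + 1, rows):      # clear entries below the pivot
            factor = M[i][c] / M[r][c]
            M[i] = [M[i][j] - factor * M[r][j] for j in range(cols)]
        r += 1
    return r

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # second row = 2 * first row
At = [list(row) for row in zip(*A)]      # transpose of A
print(rank(A), rank(At))  # 2 2
```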

4. Invertible Matrices

Definition 11 (Invertible matrix).
An $n \times n$ matrix $A$ is called invertible (or nonsingular) if there exists a matrix $B$ such that $AB = BA = I_n$. The matrix $B$ is called the inverse of $A$ and is denoted $A^{-1}$.
Theorem 12 (Equivalent conditions for invertibility).
For an $n \times n$ matrix $A$, the following are equivalent:
  1. $A$ is invertible.

  2. $\operatorname{rank} A = n$.

  3. $\det A \neq 0$.

  4. The only solution to $Ax = 0$ is $x = 0$.

  5. The columns of $A$ are linearly independent.

  6. The reduced row echelon form of $A$ is $I_n$.

  7. The map $T_A : K^n \to K^n$ is an isomorphism.
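A quick sanity check of conditions 1, 3, and 5 on an assumed $2 \times 2$ example (for $n = 2$, linear independence of the two columns is equivalent to a nonzero determinant):

```python
# Verifying three of the equivalent conditions of Theorem 12
# for an example 2x2 matrix.

from fractions import Fraction as F

A = [[1, 2], [3, 4]]

# Condition 3: det A != 0.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert det == -2 and det != 0

# Condition 5: the columns are linearly independent; for n = 2 this
# holds exactly when the determinant of the column pair is nonzero.
c1, c2 = (A[0][0], A[1][0]), (A[0][1], A[1][1])
assert c1[0] * c2[1] - c1[1] * c2[0] != 0

# Condition 1: the inverse exists explicitly, A^{-1} = adj(A) / det.
Ainv = [[F(A[1][1], det), F(-A[0][1], det)],
        [F(-A[1][0], det), F(A[0][0], det)]]
prod = [[sum(Ainv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]  # A^{-1} A is the identity
```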

Theorem 13 (Uniqueness of the inverse).
The inverse of an invertible matrix is unique.
Proof.
If $AB = BA = I$ and $AC = CA = I$, then $B = BI = B(AC) = (BA)C = IC = C$. □
Theorem 14 (Properties of the inverse).
If A and B are invertible matrices, then:
  1. $(A^{-1})^{-1} = A$.

  2. $(AB)^{-1} = B^{-1}A^{-1}$.

  3. $(A^T)^{-1} = (A^{-1})^T$.

Proof.
(1) Since $A^{-1} \cdot A = I$ and $A \cdot A^{-1} = I$, the matrix $A$ serves as the inverse of $A^{-1}$. By uniqueness, $(A^{-1})^{-1} = A$.

(2) We verify directly: $(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I$. Similarly, $(B^{-1}A^{-1})(AB) = I$. By uniqueness, $(AB)^{-1} = B^{-1}A^{-1}$.

(3) We compute $(A^T)(A^{-1})^T = (A^{-1}A)^T = I^T = I$. Likewise, $(A^{-1})^T A^T = (AA^{-1})^T = I$. By uniqueness, $(A^T)^{-1} = (A^{-1})^T$. □
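Properties (2) and (3) of Theorem 14 can also be confirmed on concrete matrices. The sketch below uses assumed $2 \times 2$ examples and the adjugate formula for the inverse:

```python
# Checking (AB)^{-1} = B^{-1} A^{-1} and (A^T)^{-1} = (A^{-1})^T
# on example 2x2 matrices with exact rational arithmetic.

from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = M[0]; c, d = M[1]
    det = a * d - b * c
    return [[F(d, det), F(-b, det)], [F(-c, det), F(a, det)]]

def transpose(M):
    return [list(row) for row in zip(*M)]

A = [[1, 2], [3, 5]]   # det = -1, so A is invertible
B = [[2, 1], [1, 1]]   # det = 1, so B is invertible

# Theorem 14 (2): the inverse of a product reverses the order.
assert inv2(matmul(A, B)) == matmul(inv2(B), inv2(A))
# Theorem 14 (3): inverse and transpose commute.
assert inv2(transpose(A)) == transpose(inv2(A))
```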
