
Linear Independence, Bases, and Dimension

Every basis of a finite-dimensional vector space has the same number of elements — this fact, proved via the Steinitz exchange lemma, makes dimension well-defined. We build to it through linear combinations, span, and the precise criterion for linear independence.

Folio Official
March 1, 2026

1. Linear Combinations and Span

Definition 1 (Linear combination).
Let $v_1, \dots, v_n$ be vectors in a vector space $V$ and let $a_1, \dots, a_n \in K$ be scalars. The expression
$$a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$$
is called a linear combination of $v_1, \dots, v_n$.
Definition 2 (Span).
Let $S = \{v_1, \dots, v_n\} \subseteq V$. The set of all linear combinations of elements of $S$,
$$\mathrm{span}(S) = \left\{\, \sum_{i=1}^{n} a_i v_i \;\middle|\; a_i \in K \,\right\},$$
is called the span of $S$. When $\mathrm{span}(S) = V$, we say that $S$ spans $V$ (or that $S$ is a spanning set for $V$).
Example 3.
In $\mathbb{R}^3$, we have $\mathrm{span}\{(1,0,0),(0,1,0)\} = \{(x,y,0) \mid x, y \in \mathbb{R}\}$, the $xy$-plane. Adding the third standard vector yields $\mathrm{span}\{(1,0,0),(0,1,0),(0,0,1)\} = \mathbb{R}^3$.
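Span membership can also be tested numerically: a vector lies in $\mathrm{span}(S)$ exactly when appending it to $S$ does not increase the rank of the matrix whose rows are the vectors of $S$. A minimal sketch with NumPy (the helper name `in_span` is ours, not standard):

```python
import numpy as np

def in_span(vectors, v):
    """True iff v is a linear combination of the given vectors.

    Appending v leaves the rank unchanged exactly when v already
    lies in the span (the rank is the dimension of the span)."""
    A = np.array(vectors, dtype=float)
    B = np.vstack([A, np.array(v, dtype=float)])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)

e1, e2 = (1, 0, 0), (0, 1, 0)
print(in_span([e1, e2], (3, -2, 0)))  # in the xy-plane: True
print(in_span([e1, e2], (0, 0, 1)))   # not in the xy-plane: False
```

Floating-point rank decisions use a tolerance internally, so this is a numerical check, not an exact algebraic one.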

2. Linear Independence and Dependence

Definition 4 (Linear independence).
Vectors $v_1, \dots, v_n \in V$ are linearly independent if
$$a_1 v_1 + a_2 v_2 + \cdots + a_n v_n = 0 \implies a_1 = a_2 = \cdots = a_n = 0.$$
If the vectors are not linearly independent, they are called linearly dependent.
Theorem 5.
The vectors $v_1, \dots, v_n$ are linearly dependent if and only if some $v_k$ can be expressed as a linear combination of the remaining vectors.
Proof.
($\Rightarrow$) By linear dependence, there exist scalars $a_1, \dots, a_n$, not all zero, such that $\sum a_i v_i = 0$. Choose $k$ with $a_k \neq 0$. Then $v_k = -a_k^{-1} \sum_{i \neq k} a_i v_i$.

($\Leftarrow$) If $v_k = \sum_{i \neq k} b_i v_i$, then $\sum_{i \neq k} b_i v_i - v_k = 0$ is a nontrivial linear relation. $\square$
Example 6.
The vectors $v_1 = (1,0,0)$, $v_2 = (0,1,0)$, $v_3 = (1,1,0)$ are linearly dependent, since $v_3 = v_1 + v_2$.

The vectors $v_1 = (1,0,0)$, $v_2 = (0,1,0)$, $v_3 = (0,0,1)$ are linearly independent. Indeed, $a_1(1,0,0) + a_2(0,1,0) + a_3(0,0,1) = (0,0,0)$ immediately gives $a_1 = a_2 = a_3 = 0$.
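Definition 4 is mechanically checkable in $\mathbb{R}^d$: the vectors are independent exactly when the matrix with the vectors as rows has rank equal to the number of vectors, so the homogeneous system has only the trivial solution. A sketch (the helper name `is_independent` is ours):

```python
import numpy as np

def is_independent(vectors):
    """Vectors are linearly independent iff the rank of the matrix
    they form equals the number of vectors: only the trivial
    combination produces zero."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

# Example 6: v3 = v1 + v2, so the first triple is dependent
print(is_independent([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # False
print(is_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
```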

3. Bases

Definition 7 (Basis).
A set $\{v_1, \dots, v_n\}$ of vectors in a vector space $V$ is a basis for $V$ if it satisfies the following two conditions simultaneously:
  1. $\{v_1, \dots, v_n\}$ spans $V$.

  2. $\{v_1, \dots, v_n\}$ is linearly independent.

Theorem 8 (Unique representation with respect to a basis).
If $\{v_1, \dots, v_n\}$ is a basis for $V$, then every vector $v \in V$ can be written uniquely as
$$v = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n.$$
Proof.
Existence follows from the fact that the basis spans $V$. For uniqueness, suppose $v = \sum a_i v_i = \sum b_i v_i$. Then $\sum (a_i - b_i) v_i = 0$, and linear independence forces $a_i - b_i = 0$ for each $i$, so $a_i = b_i$. $\square$
Definition 9 (Coordinates).
Given a basis $\{v_1, \dots, v_n\}$, the scalars $(a_1, \dots, a_n)$ in the unique representation $v = \sum a_i v_i$ are called the coordinates of $v$ with respect to that basis.
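In $\mathbb{R}^n$, the coordinates of Theorem 8 are computed by solving a linear system: writing the basis vectors as the columns of a matrix $B$, the coordinate vector $a$ is the unique solution of $Ba = v$ (unique because the columns are independent, so $B$ is invertible). A sketch with an illustrative non-standard basis of $\mathbb{R}^3$:

```python
import numpy as np

# A (non-standard) basis of R^3, written as the columns of B
b1, b2, b3 = (1, 0, 0), (1, 1, 0), (0, 1, 1)
B = np.column_stack([b1, b2, b3])

# Coordinates of v with respect to this basis: solve B @ a = v.
v = np.array([5, 2, -1], dtype=float)
a = np.linalg.solve(B, v)
print(a)  # (2, 3, -1), since v = 2*b1 + 3*b2 - 1*b3
```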

4. The Steinitz Exchange Lemma

Theorem 10 (Steinitz exchange lemma).
Let $V$ be a vector space. If $\{v_1, \dots, v_m\}$ spans $V$ and $\{w_1, \dots, w_n\}$ is linearly independent, then
$$n \leq m.$$
Moreover, $n$ of the vectors $v_1, \dots, v_m$ can be replaced by $w_1, \dots, w_n$ without losing the spanning property.
Proof.
Since $w_1$ lies in $\mathrm{span}\{v_1, \dots, v_m\}$, we can write $w_1 = \sum_{j=1}^{m} c_j v_j$. At least one coefficient is nonzero (because $w_1 \neq 0$). Re-indexing if necessary, assume $c_1 \neq 0$. Then $v_1 = c_1^{-1}\bigl(w_1 - \sum_{j=2}^{m} c_j v_j\bigr)$, so $\{w_1, v_2, \dots, v_m\}$ still spans $V$.

Repeat this process for $w_2, w_3, \dots$ in turn. At the $k$-th step, we add $w_k$ to the spanning set and remove one of the $v$'s. The linear independence of $w_1, \dots, w_k$ guarantees that a removable $v$ always exists: when $w_k$ is written in terms of $w_1, \dots, w_{k-1}$ and the remaining $v$'s, some $v$ must carry a nonzero coefficient, for otherwise $w_k$ would be a linear combination of $w_1, \dots, w_{k-1}$.

If $n > m$, then after $m$ steps all the $v$'s would be exhausted and $\{w_1, \dots, w_m\}$ would span $V$. But then $w_{m+1}$ would be a linear combination of $w_1, \dots, w_m$, contradicting the linear independence of $\{w_1, \dots, w_{m+1}\}$. Therefore $n \leq m$. $\square$
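The proof is constructive, and for subspaces of $\mathbb{R}^d$ the exchange can be carried out numerically: at step $k$, insert $w_k$ and remove any remaining $v$ whose removal keeps the rank (hence the span) unchanged. A sketch, with floating-point rank standing in for exact arithmetic (the function name `steinitz_exchange` is ours):

```python
import numpy as np

def steinitz_exchange(spanning, independent):
    """Swap the independent vectors into the spanning set one at a
    time, removing an old vector at each step while preserving the
    span (checked via matrix rank)."""
    current = [np.array(v, dtype=float) for v in spanning]
    target = np.linalg.matrix_rank(np.array(spanning, dtype=float))
    for k, w in enumerate(independent):
        w = np.array(w, dtype=float)
        # Positions 0..k-1 already hold w's; try removing each old v.
        # The lemma guarantees some choice preserves the span.
        for j in range(k, len(current)):
            candidate = current[:k] + [w] + current[k:j] + current[j + 1:]
            if np.linalg.matrix_rank(np.array(candidate)) == target:
                current = candidate
                break
    return current

# Exchange (1,1,0) and (0,1,1) into the standard spanning set of R^3
new_set = steinitz_exchange([(1, 0, 0), (0, 1, 0), (0, 0, 1)],
                            [(1, 1, 0), (0, 1, 1)])
print(np.linalg.matrix_rank(np.array(new_set)))  # still 3: span preserved
```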

5. Dimension

Theorem 11 (All bases have the same size).
If a vector space $V$ has a finite basis, then every basis of $V$ has the same number of elements.
Proof.
Let $\{v_1, \dots, v_m\}$ and $\{w_1, \dots, w_n\}$ be two bases of $V$. The first is a spanning set and the second is linearly independent, so $n \leq m$ by the Steinitz exchange lemma. Reversing the roles gives $m \leq n$, and therefore $m = n$. $\square$
Definition 12 (Dimension).
The number of elements in any basis of a vector space $V$ is called the dimension of $V$ and is denoted $\dim V$. If no finite basis exists, we write $\dim V = \infty$.
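For a subspace of $\mathbb{R}^d$ given by a finite spanning set $S$, the dimension is directly computable: $\dim \mathrm{span}(S)$ equals the rank of the matrix whose rows are the vectors of $S$, since any maximal independent subset of $S$ is a basis of the span. A quick numerical illustration:

```python
import numpy as np

# dim span(S) = rank of the matrix with the vectors of S as rows
S = [(1, 0, 0), (0, 1, 0), (1, 1, 0)]   # spans the xy-plane
print(np.linalg.matrix_rank(np.array(S, dtype=float)))  # 2

T = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # spans all of R^3
print(np.linalg.matrix_rank(np.array(T, dtype=float)))  # 3
```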
Theorem 13.
If $\dim V = n$, then:
  1. Every linearly independent set in $V$ contains at most $n$ vectors.

  2. Any $n$ linearly independent vectors in $V$ form a basis.

  3. Any $n$ vectors that span $V$ form a basis.

Proof.
(1) This is an immediate consequence of the Steinitz exchange lemma.

(2) Suppose $\{w_1, \dots, w_n\}$ is linearly independent but does not span $V$. Then there exists $v \in V$ with $v \notin \mathrm{span}\{w_1, \dots, w_n\}$, and $\{w_1, \dots, w_n, v\}$ is a linearly independent set of $n + 1$ vectors, contradicting (1).

(3) Suppose $\{v_1, \dots, v_n\}$ spans $V$ but is linearly dependent. Then some $v_k$ is a linear combination of the rest, and removing it leaves $n - 1$ vectors that still span $V$. But $V$ has a basis of $n$ linearly independent vectors, so by Steinitz we would need $n \leq n - 1$, a contradiction. $\square$
Example 14.
  • $\dim \mathbb{R}^n = n$: the standard basis is $\{e_1, \dots, e_n\}$.

  • $\dim K[x]_{\leq n} = n + 1$: a basis is $\{1, x, x^2, \dots, x^n\}$.

  • $\dim M_{m \times n}(K) = mn$: a basis is $\{E_{ij}\}$, where $E_{ij}$ is the matrix with $1$ in position $(i,j)$ and $0$ elsewhere.
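The last item can be checked concretely for $M_{2 \times 3}(\mathbb{R})$: flattening each $E_{ij}$ to a length-$6$ vector produces $6$ independent vectors, so the $E_{ij}$ form a basis and $\dim M_{2 \times 3} = 2 \cdot 3 = 6$. A sketch:

```python
import numpy as np

m, n = 2, 3
# E_ij has a 1 in position (i, j) and 0 elsewhere; build it by
# placing the 1 at flat index i*n + j and reshaping to m x n.
basis = [np.eye(1, m * n, k=i * n + j).reshape(m, n)
         for i in range(m) for j in range(n)]

# Flatten each matrix to a row; the mn rows are independent.
A = np.array([E.flatten() for E in basis])
print(np.linalg.matrix_rank(A))  # 6 = m * n
```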

Linear Algebra Textbook, Part 3 of 13
Previous: Vector Spaces: Definitions and First Properties
Next: Linear Maps: Structure-Preserving Maps Between Vector Spaces
