
What is "dimension," really? The truth about degrees of freedom

We all say "three-dimensional space" without blinking — but what exactly does the "three" mean? The answer is less obvious than it seems, and proving it requires the Steinitz exchange lemma.

Folio Official
March 1, 2026

Everyone knows that space is three-dimensional. But what, precisely, does that mean?

The intuitive answer is: "you need three numbers to specify a point." A point in R3 is a triple (x,y,z), so there are three degrees of freedom. That is certainly true — but it hides a subtlety that most textbooks address only in passing.

1 The hidden difficulty

The catch is that the choice of coordinates is not unique. You can describe points in R2 using the standard basis {(1,0),(0,1)}, but you can equally well use the basis {(1,1),(1,−1)}:

Example 1.
The point (3,1) in standard coordinates becomes
(3,1)=2⋅(1,1)+1⋅(1,−1)
in the new basis. The coordinates have changed — from (3,1) to (2,1) — but the number of coordinates is still two.
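Finding the coordinates in a new basis amounts to solving a small linear system. A minimal sketch in plain Python, with the 2×2 case solved by Cramer's rule (the helper name is ours, for illustration):

```python
from fractions import Fraction

def coords_in_basis(b1, b2, point):
    """Solve a*b1 + c*b2 = point for (a, c) by Cramer's rule (2x2 case)."""
    (x1, y1), (x2, y2), (px, py) = b1, b2, point
    det = Fraction(x1 * y2 - x2 * y1)  # nonzero iff {b1, b2} is a basis
    a = Fraction(px * y2 - x2 * py) / det
    c = Fraction(x1 * py - px * y1) / det
    return a, c

# The point (3, 1) in the basis {(1, 1), (1, -1)}:
a, c = coords_in_basis((1, 1), (1, -1), (3, 1))
print(a, c)  # 2 1
```

Exact rational arithmetic via `Fraction` avoids any floating-point rounding in the check.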

Is this always the case? Could some clever choice of basis for R3 use only two vectors? Or could you need four? The answer to both is no, and proving it is the central achievement of the theory of dimension.

2 The Steinitz exchange lemma

The key result is this:

Theorem 2 (Steinitz exchange lemma).
Let V be a vector space. If {v1​,…,vm​} spans V and {w1​,…,wn​} is linearly independent, then n≤m.

In plain language: a linearly independent set can never be larger than a spanning set.

Example 3.
In V=R3, the standard basis {e1​,e2​,e3​} spans the space (m=3). If a set {w1​,w2​,w3​,w4​} were linearly independent, the lemma would give 4≤3 — a contradiction. So any four vectors in R3 must be linearly dependent.
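This consequence can be checked computationally: the rank of a list of vectors in R3 is at most 3, so any four vectors must be dependent. A small exact-arithmetic sketch in plain Python (the rank helper and the four sample vectors are ours, chosen for illustration):

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Four vectors in R^3: rank is at most 3 < 4, so they are dependent.
vectors = [(1, 0, 2), (0, 1, 1), (1, 1, 0), (2, 3, 1)]
print(rank(vectors))  # 3
```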
Example 4 (The exchange procedure).
Let us see the exchange in action. Take the spanning set {e1​,e2​} of R2 and the linearly independent set {w1​} with w1​=(3,2).

Since w1 = 3e1 + 2e2, we can replace e1 with w1 to get the new spanning set {w1, e2}. Indeed, e1 = (1/3)(w1 − 2e2), so anything expressible in terms of the old set is expressible in terms of the new one.
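The exchange step can be verified directly; a quick sanity check in plain Python, with the vectors taken from the example:

```python
from fractions import Fraction

# w1 = 3*e1 + 2*e2, as in the example above
w1 = (Fraction(3), Fraction(2))
e2 = (Fraction(0), Fraction(1))

# Recover e1 = (1/3)*(w1 - 2*e2): the exchanged-out vector
# is still reachable from the new spanning set {w1, e2}.
e1 = tuple(Fraction(1, 3) * (a - 2 * b) for a, b in zip(w1, e2))
assert e1 == (1, 0)
```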

3 The definition of dimension

The Steinitz lemma has an immediate and powerful corollary:

Theorem 5.
Any two bases of a vector space V have the same number of elements.
Proof.
Let B1​={v1​,…,vm​} and B2​={w1​,…,wn​} be two bases. Since B1​ spans and B2​ is independent, n≤m. Since B2​ spans and B1​ is independent, m≤n. Therefore m=n. □

This common value is called the dimension of V, denoted dimV.

Example 6.
  • dimRn=n (standard basis {e1​,…,en​})

  • dimPn​=n+1 (basis {1,x,x2,…,xn} for polynomials of degree ≤n)

  • dimMm×n​(R)=mn (basis: the mn matrices with a single 1 in one entry and 0 everywhere else)

  • dim{0}=0 (the zero space; its basis is the empty set)

4 Infinite-dimensional spaces

Not every vector space has a finite basis.

Example 7.
The space R[x] of all real polynomials has the set {1,x,x2,x3,…} as a basis, but no finite subset spans the space: any finite set of polynomials has some maximum degree D, and no linear combination of them can reach degree D+1. So dimR[x]=∞.
Example 8.
The space C[0,1] of continuous functions on [0,1] is also infinite-dimensional. Fourier analysis — the idea that a function can be expanded in terms of infinitely many sines and cosines — is a manifestation of this infinite-dimensionality.

5 The dimension formula

Dimensions add, but with a correction term:

Theorem 9 (Dimension formula).
For finite-dimensional subspaces W1​,W2​ of a vector space V,
dim(W1​+W2​)=dimW1​+dimW2​−dim(W1​∩W2​).

This is the vector-space analogue of the inclusion-exclusion principle for sets.

Example 10.
In V=R3, let W1​ be the xy-plane (dimW1​=2) and W2​ the xz-plane (dimW2​=2). Their intersection W1​∩W2​ is the x-axis (dim=1), so
dim(W1​+W2​)=2+2−1=3=dimR3.
Two planes through the origin, meeting only along a line, together span all of R3.
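The example can be checked by computing ranks of spanning sets: dim(W1 + W2) is the rank of the spanning vectors of W1 and W2 stacked together, and the formula then recovers dim(W1 ∩ W2). A sketch in plain Python (the rank helper is ours, for illustration):

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

W1 = [(1, 0, 0), (0, 1, 0)]  # spans the xy-plane
W2 = [(1, 0, 0), (0, 0, 1)]  # spans the xz-plane

dim_sum = rank(W1 + W2)  # dim(W1 + W2): rank of the combined spanning set
# Rearranging the dimension formula gives dim(W1 ∩ W2):
dim_intersection = rank(W1) + rank(W2) - dim_sum
print(dim_sum, dim_intersection)  # 3 1
```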

6 Why it matters

Dimension is the most fundamental invariant of a vector space. It tells you that R2 and R3 are genuinely different objects — not just different in appearance, but in structure. That this invariant is well-defined (independent of the choice of basis) is not obvious; it is a theorem, and a deep one. Without the Steinitz exchange lemma, we would have no guarantee that "the number of parameters" is a meaningful concept at all.

Linear Algebra · Algebra · Between the Lines
Linear Algebra — Between the Lines, Part 2 of 6
Previous: Why define vector spaces axiomatically? From arrows to axioms
Next: Why matrix multiplication works that way: when linear maps become matrices

