
Vector Spaces: Definitions and First Properties

A vector space has exactly one zero vector and every element has a unique additive inverse — these are consequences of the eight axioms, not assumptions. We prove these uniqueness results, establish the subspace criterion for verifying subspaces efficiently, and characterize when a sum of subspaces is a direct sum.

Folio Official
March 1, 2026

1. The Vector Space Axioms

Definition 1 (Vector space).
A vector space over a field K is a set V equipped with two operations — addition + : V×V → V and scalar multiplication ⋅ : K×V → V — satisfying the following eight axioms:
  1. u+v=v+u (commutativity of addition).

  2. (u+v)+w=u+(v+w) (associativity of addition).

  3. There exists an element 0∈V such that v+0=v for all v∈V (existence of a zero vector).

  4. For every v∈V, there exists −v∈V such that v+(−v)=0 (existence of additive inverses).

  5. a(bv)=(ab)v (associativity of scalar multiplication).

  6. 1⋅v=v (identity element of scalar multiplication).

  7. a(u+v)=au+av (distributivity over vector addition).

  8. (a+b)v=av+bv (distributivity over scalar addition).

Here a,b∈K and u,v,w∈V.

Throughout this chapter, we work over the field K=R unless stated otherwise.
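The eight axioms can be spot-checked numerically for R^3. The sketch below uses numpy with arbitrarily chosen sample vectors and scalars, so it illustrates the axioms on specific inputs rather than proving them:

```python
import numpy as np

# Spot-check the eight vector space axioms in R^3 on sample data.
# This checks specific inputs only; it is an illustration, not a proof.
u, v, w = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.0, 4.0]), np.array([2.0, 2.0, -5.0])
a, b = 3.0, -2.0
zero = np.zeros(3)

assert np.allclose(u + v, v + u)                  # 1. commutativity of addition
assert np.allclose((u + v) + w, u + (v + w))      # 2. associativity of addition
assert np.allclose(v + zero, v)                   # 3. zero vector
assert np.allclose(v + (-v), zero)                # 4. additive inverses
assert np.allclose(a * (b * v), (a * b) * v)      # 5. associativity of scalar mult.
assert np.allclose(1.0 * v, v)                    # 6. identity scalar
assert np.allclose(a * (u + v), a * u + a * v)    # 7. distributivity over vectors
assert np.allclose((a + b) * v, a * v + b * v)    # 8. distributivity over scalars
```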

2. Fundamental Examples

Example 2 (K^n).
R^n = {(a_1, …, a_n) ∣ a_i ∈ R} is a vector space over R under componentwise addition and scalar multiplication. The zero vector is 0 = (0, …, 0).
Example 3 (Polynomial spaces).
K[x]_{≤n} = {a_0 + a_1x + ⋯ + a_nx^n ∣ a_i ∈ K} is a vector space over K under polynomial addition and scalar multiplication. The zero vector is the zero polynomial 0, and dim K[x]_{≤n} = n + 1.
Example 4 (Matrix spaces).
The set M_{m×n}(K) of all m×n matrices over K is a vector space under matrix addition and scalar multiplication. The zero vector is the zero matrix O, and dim M_{m×n}(K) = mn.
Example 5 (Function spaces).
The set C[a,b] of all real-valued continuous functions on the interval [a,b] forms a vector space over R under pointwise addition (f+g)(x)=f(x)+g(x) and scalar multiplication (cf)(x)=c⋅f(x). This is an infinite-dimensional vector space.
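The pointwise operations of Example 5 translate directly into code. The sketch below models elements of a function space as Python callables; `add` and `smul` are hypothetical helper names chosen for this illustration:

```python
import math

# Pointwise vector space operations on functions (a sketch of Example 5).
def add(f, g):
    """(f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def smul(c, f):
    """(c f)(x) = c * f(x)."""
    return lambda x: c * f(x)

h = add(math.sin, math.cos)   # h(x) = sin(x) + cos(x)
k = smul(2.0, math.sin)       # k(x) = 2 sin(x)

x = 0.7
assert abs(h(x) - (math.sin(x) + math.cos(x))) < 1e-12
assert abs(k(x) - 2.0 * math.sin(x)) < 1e-12

# The zero vector of this space is the zero function.
zero = lambda x: 0.0
assert add(h, zero)(x) == h(x)
```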

3. Uniqueness of the Zero Vector and Additive Inverses

Theorem 6 (Uniqueness of the zero vector).
The zero vector of a vector space V is unique.
Proof.
Suppose 0 and 0′ are both zero vectors. Since 0 is a zero vector, 0′+0=0′. Since 0′ is a zero vector, 0′+0=0. Therefore 0′=0. □
Theorem 7 (Uniqueness of additive inverses).
For each v∈V, the additive inverse −v is unique.
Proof.
Suppose w_1 and w_2 are both additive inverses of v. Then w_1 = w_1 + 0 = w_1 + (v + w_2) = (w_1 + v) + w_2 = 0 + w_2 = w_2. □

4. Basic Properties of Scalar Multiplication

Theorem 8.
In any vector space V, the following hold:
  1. 0⋅v=0 (the scalar 0 annihilates every vector).

  2. a⋅0=0 (every scalar annihilates the zero vector).

  3. (−1)v=−v.

  4. av=0⟹a=0 or v=0.

Proof.
(1) We have 0⋅v=(0+0)v=0v+0v. Subtracting 0v from both sides yields 0=0v.

(2) Similarly, a⋅0=a(0+0)=a0+a0. Subtracting a0 from both sides gives 0=a0.

(3) We compute v+(−1)v=1⋅v+(−1)v=(1+(−1))v=0v=0. By uniqueness of the additive inverse, (−1)v=−v.

(4) Suppose a=0. Multiplying both sides of av=0 by a−1 gives v=a−10=0. □

5. Subspaces

Definition 9 (Subspace).
A nonempty subset W of a vector space V is called a subspace of V if W is itself a vector space under the operations inherited from V.
Theorem 10 (Subspace criterion).
Let V be a vector space and W⊆V a nonempty subset. Then W is a subspace of V if and only if the following two conditions hold:
  1. u,v∈W⟹u+v∈W.

  2. a∈K,v∈W⟹av∈W.

Proof.
Necessity is immediate. For sufficiency, since W ≠ ∅, pick any v ∈ W. Setting a = 0 in condition (2) gives 0 = 0⋅v ∈ W. Setting a = −1 gives −v ∈ W. The remaining axioms (associativity, commutativity, distributivity, etc.) are inherited from V. □
Remark 11.
Conditions (1) and (2) can be consolidated into a single condition: a,b∈K and u,v∈W imply au+bv∈W.
Example 12.
In R^3, the set W = {(x, y, z) ∣ x + y + z = 0} is a subspace. Indeed, if u = (u_1, u_2, u_3) and v = (v_1, v_2, v_3) lie in W, then (u_1 + v_1) + (u_2 + v_2) + (u_3 + v_3) = (u_1 + u_2 + u_3) + (v_1 + v_2 + v_3) = 0 + 0 = 0, so u + v ∈ W. Closure under scalar multiplication is verified similarly.
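The closure checks of Example 12 can be run numerically. In the sketch below, `in_W` is a hypothetical helper name for membership in the plane x + y + z = 0, with a tolerance absorbing floating-point error:

```python
import numpy as np

# Membership test for W = {(x, y, z) : x + y + z = 0} (Example 12).
def in_W(v, tol=1e-12):
    return abs(v.sum()) < tol

u = np.array([1.0, 2.0, -3.0])
v = np.array([4.0, -5.0, 1.0])
assert in_W(u) and in_W(v)
assert in_W(u + v)          # closure under addition
assert in_W(2.5 * u)        # closure under scalar multiplication
assert in_W(np.zeros(3))    # W contains the zero vector

# The affine plane x + y + z = 1 (Example 13) fails the zero-vector test.
def in_affine(v, tol=1e-12):
    return abs(v.sum() - 1.0) < tol

assert not in_affine(np.zeros(3))
```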
Example 13.
The set {(x,y,z)∣x+y+z=1} is not a subspace, since the zero vector 0=(0,0,0) does not belong to it.

6. Sum Spaces and Direct Sums

Definition 14 (Sum of subspaces).
Let W_1 and W_2 be subspaces of V. Their sum is defined as
W_1 + W_2 = {w_1 + w_2 ∣ w_1 ∈ W_1, w_2 ∈ W_2}.
This is the smallest subspace of V containing W_1 ∪ W_2.
Definition 15 (Direct sum).
If every vector v ∈ W_1 + W_2 can be written uniquely as v = w_1 + w_2 with w_i ∈ W_i, then the sum is called a direct sum and is denoted W_1 ⊕ W_2.
Theorem 16.
The sum W_1 + W_2 is a direct sum if and only if W_1 ∩ W_2 = {0}.
Proof.
(⇒) Let v ∈ W_1 ∩ W_2. Then v = v + 0 = 0 + v provides two decompositions of v (first with v ∈ W_1 and 0 ∈ W_2, then with 0 ∈ W_1 and v ∈ W_2). By uniqueness, v = 0.

(⇐) Suppose v = w_1 + w_2 = w_1′ + w_2′. Then w_1 − w_1′ = w_2′ − w_2 ∈ W_1 ∩ W_2 = {0}, whence w_1 = w_1′ and w_2 = w_2′. □
Theorem 17 (Dimension formula for subspaces).
For finite-dimensional subspaces W_1, W_2 of V,
dim(W_1 + W_2) = dim W_1 + dim W_2 − dim(W_1 ∩ W_2).
In particular, if W_1 + W_2 is a direct sum, then dim(W_1 ⊕ W_2) = dim W_1 + dim W_2.
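Theorem 17 can be checked on a concrete pair of subspaces of R^3. The sketch below takes W_1 = span{e_1, e_2} and W_2 = span{e_2, e_3} (an example of our choosing, with W_1 ∩ W_2 = span{e_2}), computes each dimension via matrix rank, and obtains the intersection dimension as the nullity of [B_1, −B_2]:

```python
import numpy as np

# Verify dim(W1 + W2) = dim W1 + dim W2 - dim(W1 ∩ W2) on one example.
B1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # basis of W1 as columns
B2 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # basis of W2 as columns

dim_W1 = np.linalg.matrix_rank(B1)
dim_W2 = np.linalg.matrix_rank(B2)
dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))  # dim(W1 + W2)

# Solutions (x, y) of B1 @ x = B2 @ y parametrize W1 ∩ W2, so its
# dimension is the nullity of [B1, -B2] (the bases have independent columns).
M = np.hstack([B1, -B2])
dim_cap = M.shape[1] - np.linalg.matrix_rank(M)

assert dim_sum == dim_W1 + dim_W2 - dim_cap
# The sum is direct iff dim(W1 ∩ W2) = 0; this example is not direct.
assert dim_cap != 0
```

The nullity computation works because each solution (x, y) of B_1x = B_2y picks out one intersection vector B_1x; since both bases here have independent columns, the correspondence is one-to-one.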

