Why define vector spaces axiomatically? From arrows to axioms
In high school, vectors are arrows. In a university course, they become elements of a set satisfying eight axioms. Why the abstraction? Because polynomials, matrices, and functions are all vector spaces — and the axioms are what let us treat them with a single theory.
In most high-school curricula, a vector is an arrow: a quantity with magnitude and direction, drawn tip-to-tail in the plane. You add two arrows by placing them end to end; you scale an arrow by stretching or shrinking it. The calculations are concrete and visual, and they work beautifully — in two and three dimensions.
Then comes the first week of a university linear algebra course, and the arrows vanish. In their place stands a definition bristling with eight axioms:
For all $u, v, w \in V$ and all scalars $a, b \in F$:

$u + v = v + u$ (commutativity)
$(u + v) + w = u + (v + w)$ (associativity)
There exists $0 \in V$ such that $v + 0 = v$ (zero vector)
For each $v \in V$, there exists $-v \in V$ with $v + (-v) = 0$ (additive inverse)
$a(bv) = (ab)v$ (compatibility of scalars)
$1v = v$ (identity scalar)
$a(u + v) = au + av$ (distributivity I)
$(a + b)v = av + bv$ (distributivity II)
The natural reaction is: why all this formality? The answer, in a sentence, is that arrows in $\mathbb{R}^2$ and $\mathbb{R}^3$ are not the only things that deserve to be called vectors.
1 Beyond arrows
The moment you leave $\mathbb{R}^2$ and $\mathbb{R}^3$, the arrow picture breaks down — yet the algebra survives perfectly. Polynomials, matrices, and functions all admit an addition and a scalar multiplication that obey exactly the same rules, even though there is nothing to draw.
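To see this concretely, here is a minimal Python sketch treating polynomials of degree less than 3 as vectors of coefficients. The helper names `poly_add` and `poly_scale` are invented here for illustration:

```python
# Polynomials of degree < 3, represented by coefficient tuples
# (c0, c1, c2) for c0 + c1*x + c2*x^2. No arrows anywhere, yet
# addition and scaling behave exactly like vector arithmetic.

def poly_add(p, q):
    return tuple(a + b for a, b in zip(p, q))

def poly_scale(c, p):
    return tuple(c * a for a in p)

p = (1, 0, 2)   # 1 + 2x^2
q = (0, 3, -1)  # 3x - x^2

print(poly_add(p, q))    # (1, 3, 1)  i.e. 1 + 3x + x^2
print(poly_scale(2, p))  # (2, 0, 4)  i.e. 2 + 4x^2

# The axioms hold for the same reason they hold for arrows:
# the arithmetic is coordinatewise arithmetic in the field.
assert poly_add(p, q) == poly_add(q, p)
assert poly_scale(2, poly_add(p, q)) == poly_add(poly_scale(2, p), poly_scale(2, q))
```

The same coefficient trick works for matrices (entrywise) and for functions (pointwise), which is exactly why one abstract definition covers all three.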
2 Unpacking the axioms
Eight axioms look like a lot, but they are saying something very natural when read in groups.
Axioms 1–4 say that $(V, +)$ is an abelian group. Vectors can be added, addition can be reversed, and the order of summation does not matter.
Axioms 5–6 say that scalar multiplication respects the arithmetic of the field $F$. Scaling by $b$ and then by $a$ is the same as scaling by the product $ab$.
Axioms 7–8 tie addition and scalar multiplication together through the distributive laws. Without them, the two operations would be completely unrelated — and you could not, for instance, factor a scalar out of a sum.
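Read in these three groups, the axioms can be spot-checked on concrete vectors. Below is a sketch in Python for $\mathbb{R}^3$, using integer tuples so every equality test is exact; the helper names `add` and `scale` are ours, not standard:

```python
# A spot-check of the eight axioms on integer vectors in R^3,
# grouped as in the text. An illustration, not a proof: it tests
# particular vectors, where all the arithmetic is exact.

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def scale(a, v):
    return tuple(a * x for x in v)

u, v, w = (1, 2, 3), (4, -5, 6), (-7, 8, 0)
a, b = 3, -2
zero = (0, 0, 0)

# Axioms 1-4: (V, +) is an abelian group.
assert add(u, v) == add(v, u)                      # commutativity
assert add(add(u, v), w) == add(u, add(v, w))      # associativity
assert add(v, zero) == v                           # zero vector
assert add(v, scale(-1, v)) == zero                # additive inverse

# Axioms 5-6: scaling respects field arithmetic.
assert scale(a, scale(b, v)) == scale(a * b, v)    # compatibility
assert scale(1, v) == v                            # identity scalar

# Axioms 7-8: the distributive laws tie the operations together.
assert scale(a, add(u, v)) == add(scale(a, u), scale(a, v))
assert scale(a + b, v) == add(scale(a, v), scale(b, v))

print("all eight axioms hold on these samples")
```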
3 What happens if you remove an axiom?
As with group theory, the best way to appreciate an axiom is to see what breaks without it.
Drop distributivity. If $a(u + v)$ need not equal $au + av$, then scalar multiplication and addition are decoupled. You can no longer factor, you can no longer simplify systems of linear equations, and the entire machinery of row reduction collapses.
Drop commutativity of addition. Structures whose addition is not required to be commutative do exist (near-rings and their modules, for instance), but the familiar tools of linear algebra — Gaussian elimination, eigenvalue decomposition, determinants — largely cease to function.
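The first failure can be made concrete. The sketch below defines a deliberately broken "scalar multiplication" on $\mathbb{R}^2$ that also shifts by a fixed offset; the name `bad_scale` and the offset $(1, 0)$ are invented here purely for illustration:

```python
# A deliberately broken "scalar multiplication" on R^2: scaling
# also shifts the first coordinate by 1, so it is not linear.

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def bad_scale(a, v):
    return (a * v[0] + 1, a * v[1])

u, v = (2, 3), (5, -1)
a = 4

lhs = bad_scale(a, add(u, v))                # "a(u + v)"
rhs = add(bad_scale(a, u), bad_scale(a, v))  # "au + av"

print(lhs)  # (29, 8)
print(rhs)  # (30, 8)

# Distributivity fails, so a scalar can no longer be factored
# out of a sum — the step row reduction relies on constantly.
assert lhs != rhs
```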
4 Subspaces: vector spaces inside vector spaces
Verifying all eight axioms from scratch is tedious. Fortunately, if you already know that $V$ is a vector space and you want to show that a subset $W \subseteq V$ is also one, there is a shortcut.
$0 \in W$ (contains the zero vector),
$u, v \in W \implies u + v \in W$ (closed under addition),
$a \in F,\ v \in W \implies av \in W$ (closed under scalar multiplication).
Three conditions instead of eight. The equational axioms — commutativity, associativity, compatibility, the identity scalar, and the distributive laws — are inherited automatically from $V$, and additive inverses come free, since $-v = (-1)v$ lies in $W$ by closure.
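As a sketch, here is the three-condition test applied to the plane $W = \{(x, y, z) : x + y + z = 0\}$ inside $\mathbb{R}^3$. Spot-checking samples cannot replace a proof, and the helper names `in_W`, `add`, and `scale` are ours:

```python
# The three-condition subspace test for the plane
# W = {(x, y, z) : x + y + z = 0} inside R^3.

def in_W(v):
    return sum(v) == 0

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def scale(a, v):
    return tuple(a * x for x in v)

u, v = (1, -1, 0), (2, 3, -5)
a = -4

assert in_W((0, 0, 0))  # 1. contains the zero vector
assert in_W(add(u, v))  # 2. closed under addition
assert in_W(scale(a, u))  # 3. closed under scalar multiplication

# A non-example: the shifted plane x + y + z = 1 fails the very
# first condition (it misses the zero vector), so it cannot be a
# subspace, and nothing else needs checking.
assert not (0 + 0 + 0 == 1)

print("W passes the three-condition test on these samples")
```

In a real proof the two closure conditions are checked symbolically — $(u_1 + v_1) + (u_2 + v_2) + (u_3 + v_3) = 0 + 0 = 0$ — rather than on samples; the code only mirrors the structure of that argument.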
5 Why it matters
The axiomatic definition of a vector space is not an exercise in pedantry. It is what allows a single body of theory — bases, dimension, linear maps, eigenvalues — to apply simultaneously to arrow vectors, polynomials, matrices, functions, and any other structure that satisfies the axioms. The moment you prove a theorem about "vector spaces," it becomes a theorem about all of these objects at once. That is the power of abstraction: one proof, infinitely many applications.
Mathematics "between the lines" — exploring the intuition textbooks leave out, written in LaTeX on Folio.