Okay, it's time to make a big leap forward in terms of concreteness. The Banach-Tarski paradox makes a strong statement about $\mathbb{R}^3$ that *isn't* true about $\mathbb{R}$ or $\mathbb{R}^2$. Now, we still don't really know what $\mathbb{R}$ is, but if we pretend we know what it is, we can say stuff about $\mathbb{R}^3$. Certainly, $\mathbb{R}^3$ has the product topology of $\mathbb{R}\times\mathbb{R}\times\mathbb{R}$ — but it has much more than this. It has an origin, for instance, and a distance function, and a way to measure angles. The distance function, in turn, allows us to define spheres and isometries (i.e. distance-preserving maps), which are both part of the statement of Banach-Tarski. All of these are summarized by saying that $\mathbb{R}^3$ is a **vector space**.

A **(real)**^{1} **vector space** is an abelian group $V$, with an operation we'll call “addition,” together with an action of the real numbers $\mathbb{R}$ on it that we'll call “scalar multiplication.” For all $a, b \in \mathbb{R}$ and $\mathbf{v}, \mathbf{w} \in V$, the following must hold:

- $1\mathbf{v} = \mathbf{v}$, where $1$ is the identity of $\mathbb{R}$.
- $a(b\mathbf{v}) = (ab)\mathbf{v}$ (part of the group action condition, but worth explicitly stating)
- $(a + b)\mathbf{v} = a\mathbf{v} + b\mathbf{v}$ (addition in $\mathbb{R}$ distributes over scalar multiplication)
- $a(\mathbf{v} + \mathbf{w}) = a\mathbf{v} + a\mathbf{w}$ (same deal, with addition in $V$)

Let $-\mathbf{v}$ be the additive inverse of $\mathbf{v}$: then we can just *define* subtraction via $\mathbf{w} - \mathbf{v} = \mathbf{w} + (-\mathbf{v})$. And, as usual, there are silly little factoids we can prove: for example, $0\mathbf{v} = \mathbf{0}$ for every $\mathbf{v}$.
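To spell that one out (it's the standard one-line argument): $0\mathbf{v} = (0+0)\mathbf{v} = 0\mathbf{v} + 0\mathbf{v}$ by the third axiom above, and adding $-(0\mathbf{v})$ to both sides gives $\mathbf{0} = 0\mathbf{v}$.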

But really what you need are examples. If you've seen vectors in $\mathbb{R}^2$ and $\mathbb{R}^3$ in school, it's hopefully clear that they fit this description. If you haven't, here are the basics: vectors are drawn as arrows in space and represent the act of moving a certain distance in a certain direction. We write them the same way we write points: $(a, b)$ means “move $a$ units in the $x$-direction and $b$ in the $y$-direction.” In this sense, it doesn't matter where we put the arrows, i.e. they don't have a fixed start or end-point, but can be moved around space, as long as they keep their length and direction. Two arrows $\mathbf{v}$ and $\mathbf{w}$ are added by putting the start of $\mathbf{w}$ at the end of $\mathbf{v}$, then drawing an arrow from the start of $\mathbf{v}$ to the end of $\mathbf{w}$. Scalar multiplication just corresponds to changing the length of a vector (and flipping it if the scalar is negative). We just call this vector space $\mathbb{R}^2$ (or $\mathbb{R}^3$, for arrows in three dimensions).
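If it helps to see the coordinate arithmetic, here's a tiny numpy sketch (the particular vectors are just ones I made up):

```python
import numpy as np

# Vectors in R^2 as coordinate pairs; addition and scalar multiplication are entrywise.
v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])

print(v + w)   # [4. 1.]   -- "walk along v, then along w"
print(2 * v)   # [2. 4.]   -- same direction, twice the length
print(-1 * v)  # [-1. -2.] -- same length, flipped direction
```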

Another example is the set of polynomials of degree at most $n$ with real coefficients. (Recall that a **polynomial** is a function of the form $p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$, and its **degree** is the highest power of $x$ with a nonzero coefficient.) The sum of two such polynomials still has degree at most $n$, and likewise scalar multiplication gives back a polynomial of degree at most $n$. We don't even have to bound the degree: the set of *all* polynomials is a vector space, as is the set of **power series**, which are like polynomials but with possibly infinitely many terms. Hell, we could go even further: if $X$ is a set, the set of functions $X \to \mathbb{R}$ is a vector space, as is the set of bounded functions, continuous functions (if $X$ has a topology), or differentiable functions (if $X$ is $\mathbb{R}$ or some other set we can do calculus on). Make sure you understand why all of these are true.
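One concrete way to see the first example (this representation is my own, just for illustration): store a polynomial of degree at most $2$ as its coefficient vector $(a_0, a_1, a_2)$, so that addition and scalar multiplication happen coefficient-by-coefficient.

```python
import numpy as np

# Represent a0 + a1*x + a2*x^2 by its coefficient vector (a0, a1, a2).
p = np.array([1.0, 0.0, 2.0])   # 1 + 2x^2
q = np.array([0.0, 3.0, -2.0])  # 3x - 2x^2

print(p + q)  # [1. 3. 0.]    -- the polynomial 1 + 3x (the degree dropped, but it's still <= 2)
print(5 * p)  # [ 5.  0. 10.] -- the polynomial 5 + 10x^2
```

In other words, this space is just $\mathbb{R}^3$ in disguise, a point we'll come back to below.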

The natural morphism of vector spaces is called a **linear function** or **transformation**. As you might expect, it must preserve vector addition and scalar multiplication, which is nicely summed up in the statement $f(a\mathbf{v} + b\mathbf{w}) = af(\mathbf{v}) + bf(\mathbf{w})$. And, of course, an **isomorphism** of vector spaces is just a bijection that's linear both ways. The name comes from the fact that linear transformations from $\mathbb{R}$ to itself have graphs that are just lines (prove this! you want to show that $f(x) = ax$ is the only possible kind of linear transformation from $\mathbb{R}$ to itself). Likewise, we can talk about **vector subspaces**, which are just subsets of a vector space that are vector spaces with the same operations.
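Here's a quick numerical sanity check of that condition, using a map I picked as an example:

```python
import numpy as np

def f(v):
    """An example linear map R^2 -> R^2: f(x, y) = (2x + y, x - y)."""
    x, y = v
    return np.array([2 * x + y, x - y])

v = np.array([1.0, 2.0])
w = np.array([-3.0, 0.5])
a, b = 4.0, -2.0

# Linearity says these two should agree.
print(f(a * v + b * w))     # [27.  3.]
print(a * f(v) + b * f(w))  # [27.  3.]
```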

Just like we learned to build topologies from bases and groups from generators, we can build vector spaces from their version of bases. The key is the idea of a **linear combination**, which is a finite sum of scalar multiples of vectors: $a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_n\mathbf{v}_n$. The **span** of a set of vectors is the set of their linear combinations, and dually, a set of vectors is **linearly independent** if the only linear combination of them equal to the zero vector has all of the $a_i$ zero as well. Think about this in terms of $\mathbb{R}^3$: the span of a set of vectors is the smallest line, plane, space, or whatever containing them. If they are linearly dependent, then we can write at least one of them in terms of the others, so we can choose a proper subset of them with the same span. In $\mathbb{R}^3$, this means that, for example, 3 linearly dependent vectors only span a plane or line rather than the whole space. A **basis** for a vector space is a linearly independent set of vectors that spans it. (If this is infinite, remember we're only allowing finite sums, since in general, abelian groups aren't closed under infinite sums.)
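You can experiment with this numerically: the dimension of the span of some vectors is the rank of the matrix having them as rows (rank is, roughly, the number of linearly independent rows; it shows up again in the proof below). The vectors here are my own examples.

```python
import numpy as np

# Three vectors in R^3; the third is the sum of the first two,
# so they're linearly dependent and only span a plane.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2

print(np.linalg.matrix_rank(np.vstack([v1, v2, v3])))  # 2 -- a plane, not all of R^3

# Replace v3 by something outside that plane and the three vectors do form a basis of R^3.
print(np.linalg.matrix_rank(np.vstack([v1, v2, [0.0, 0.0, 1.0]])))  # 3
```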

The following things are true about bases, and you should prove them:

- They are maximal linearly independent sets: if you add any other vector to them, they become linearly dependent.
- They are minimal spanning sets: if you remove any vector, their span shrinks.
- Every vector space has a basis, and in particular, every linearly independent set is a subset of a basis, and every spanning set is a superset of one. (You need the axiom of choice here.)
- Any vector in the vector space is a *unique* linear combination of basis elements. (This is a consequence of every vector being at least one linear combination of spanning elements, and every vector being at most one linear combination of linearly independent vectors.)
- *Any two bases for a vector space have the same cardinality*. The proof of this is a bit more involved.
- Suppose that $\{\mathbf{a}_i\}_{i\in I}$ and $\{\mathbf{b}_j\}_{j\in J}$ are bases with $|J| < |I|$. If $I$ is infinite, write each $\mathbf{b}_j$ as a linear combination of the $\mathbf{a}_i$, and let $E_j\subseteq I$ be the set of $i$ for which $\mathbf{a}_i$ has a nonzero coefficient. Then $\bigcup_{j\in J} E_j$ is a union over $J$ of finite sets, so its cardinality is strictly less than $|I|$ (it's finite if $J$ is finite, and at most $|J|$ otherwise), and in particular, there's an $i_0\in I-\bigcup_j E_j$ (using the axiom of choice). We can then write $\mathbf{a}_{i_0}$ as a linear combination of the $\mathbf{b}_j$, and then each $\mathbf{b}_j$ as a linear combination of $\mathbf{a}_i$'s not including $\mathbf{a}_{i_0}$. So the $\mathbf{a}_i$ aren't linearly independent, which is a contradiction.
- If $I$ is finite, instead write each $\mathbf{a}_i$ as a linear combination of the $\mathbf{b}_j$. We can then create a matrix whose $(i,j)$th entry is the coefficient of $\mathbf{b}_j$ in $\mathbf{a}_i$. The proof then uses a theorem that says that the number of linearly independent rows of a matrix is equal to the number of linearly independent columns. If you're inquisitive, you should look this up (the key phrase is "row rank equals column rank"), but I don't really have the space for it here; there's a quick numerical check of it just below, though.
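
Here's that theorem in action with numpy (checking one random matrix proves nothing, of course, but it's reassuring):

```python
import numpy as np

# "Row rank equals column rank": a matrix and its transpose have the same rank.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 7))

print(np.linalg.matrix_rank(A))    # 4
print(np.linalg.matrix_rank(A.T))  # 4 -- same
```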

We call the cardinality of a vector space's basis its **dimension**. In fact, vector spaces behave somewhat like free groups with respect to their bases: any map of a basis into a vector space extends to a unique linear map from the first vector space to the second. (Proof by obviosity: let $f(a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n) = a_1 f(\mathbf{v}_1) + \cdots + a_n f(\mathbf{v}_n)$.) One application of this is that any two $n$-dimensional vector spaces are isomorphic for $n$ finite: a bijection between their bases gives a unique linear map that is then an isomorphism because they're bases. So $\mathbb{R}^n$ is really the only $n$-dimensional real vector space.
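For a concrete case, take the polynomials of degree at most $2$ from before: the monomials $1, x, x^2$ form a basis, so the space is $3$-dimensional, and matching this basis with the standard basis of $\mathbb{R}^3$ extends to the isomorphism $a_0 + a_1 x + a_2 x^2 \longleftrightarrow (a_0, a_1, a_2)$.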

One last thing: matrices. “Matrix” is a word almost as scary as “vector”, but they're really quite simple. An $m\times n$ **matrix** is a rectangular array of numbers $(a_{ij})$, where $a_{ij}$ is in the $i$th row and $j$th column, $1\le i\le m$, and $1\le j\le n$. We can add and subtract matrices entry-by-entry, and there's a noncommutative multiplication operation that multiplies an $m\times n$ and an $n\times p$ matrix to get an $m\times p$ matrix. It's defined by letting $c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}$.
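If the sum formula looks opaque, here's a small example (my own matrices), computed once by the formula and once with numpy's built-in product:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])   # a 2x3 matrix
B = np.array([[1.0, 1.0],
              [0.0, 2.0],
              [4.0, 0.0]])        # a 3x2 matrix

# Entry (i, j) of the product is the sum over k of A[i, k] * B[k, j].
C = np.array([[sum(A[i, k] * B[k, j] for k in range(3)) for j in range(2)]
              for i in range(2)])

print(C)      # [[ 1.  5.]
              #  [12.  2.]]
print(A @ B)  # the same thing, computed by numpy
```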

This looks kind of stupid, yes, and it's only barely explained why you learn it in school. Here's why. If we've chosen bases $\mathbf{v}_1,\dots,\mathbf{v}_n$ and $\mathbf{w}_1,\dots,\mathbf{w}_m$ for finite-dimensional vector spaces $V$ and $W$, then any linear map $f:V\to W$ can be described by a matrix, and vice versa: just let $a_{ij}$ be the coefficient of $\mathbf{w}_i$ in $f(\mathbf{v}_j)$. This matrix is $m\times n$. If we write $\mathbf{v}\in V$ as an $n\times 1$ matrix (the column of its coordinates in the basis $\mathbf{v}_1,\dots,\mathbf{v}_n$), then multiplying the matrix for $f$ by the matrix for $\mathbf{v}$ gives an $m\times 1$ matrix that is precisely the image $f(\mathbf{v})$ in terms of the basis $\mathbf{w}_1,\dots,\mathbf{w}_m$. Likewise, composing two maps is given by multiplying their matrices, the inverse of a linear isomorphism is given by the inverse of its matrix, and so on. A special case is if we have two different bases for $V$ — we can change coordinates from one to the other by just constructing the above matrix for the identity map.
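Here's that dictionary in action (the two maps are just examples I made up; the point is that applying a map is matrix-times-vector and composing maps is matrix-times-matrix):

```python
import numpy as np

# Two linear maps R^2 -> R^2, written as matrices in the standard basis.
F = np.array([[2.0, 1.0],
              [1.0, -1.0]])   # f(x, y) = (2x + y, x - y)
G = np.array([[0.0, 1.0],
              [1.0, 0.0]])    # g(x, y) = (y, x), reflection across the line y = x

v = np.array([1.0, 2.0])

print(F @ v)        # [ 4. -1.] -- f applied to v
print(G @ (F @ v))  # [-1.  4.] -- g applied to f(v)
print((G @ F) @ v)  # [-1.  4.] -- same answer: the matrix of the composition is G @ F
```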

Is this too easy or too hard? I can definitely talk more about matrices or vectors if the subject is confusing. The last important thing I need to talk about before Banach-Tarski depends on this matrix and vector-space machinery: it's the group of isometries of Euclidean space. I have no problem with taking a slower route there, if need be.

^{1}The “real” is because we’re using the real numbers. In fact, this isn’t the only choice: we can define a vector space over any **field**, which is basically a set where you can do addition, subtraction, multiplication, and division. Outside of abstract algebra, the only really common choice is the complex numbers, but, for example, the rationals are also a field. Elements of the field are called **scalars**.
