Gracious Living


Vector Spaces
November 17, 2010, 17:30
Filed under: Algebra, Math

Okay, it’s time to make a big leap forwards in terms of concreteness.  The Banach-Tarski paradox makes a strong statement about \mathbb{R}^3 that isn’t true about \mathbb{R}^2 or \mathbb{R}.  Now, we still don’t really know what \mathbb{R} is, but if we pretend we know what it is, we can say stuff about \mathbb{R}^3.  Certainly, \mathbb{R}^3 has the product topology of \mathbb{R}\times\mathbb{R}\times\mathbb{R} — but it has much more than this.  It has an origin, for instance, and a distance function, and a way to measure angles.  The distance function, in turn, allows us to define spheres and isometries (i. e. distance-preserving maps), which are both part of the statement of Banach-Tarski.  All of these are summarized by saying that \mathbb{R}^3 is a vector space.

A (real)1 vector space is an abelian group V, with an operation we’ll call “addition,” together with an action of \mathbb{R} on it that we’ll call “scalar multiplication.”  For all \mathbf{v},\mathbf{w}\in V and a,b\in\mathbb{R}, the following must hold:

  • a(\mathbf{v+w})=a\mathbf{v}+a\mathbf{w}
  • a(\mathbf{0})=\mathbf{0}, where \mathbf{0} is the identity of V.
  • (a+b)\mathbf{v}=a\mathbf{v}+b\mathbf{v} (part of the group action condition, but worth explicitly stating)
  • (ab)\mathbf{v}=a(b\mathbf{v}) (\mathbb{R} also acts multiplicatively)
  • 1\mathbf{v}=\mathbf{v} (same deal)

Let -\mathbf{v} be the additive inverse of \mathbf{v}: then we can just define subtraction via \mathbf{v}-\mathbf{w}=\mathbf{v}+(-\mathbf{w}).  And, as usual, there are silly little factoids we can prove: for example, 0\mathbf{v}=(1-1)\mathbf{v}=\mathbf{v}-\mathbf{v}=\mathbf{0}.
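(A quick aside: the step \mathbf{v}-\mathbf{v} above tacitly uses another factoid, namely (-1)\mathbf{v}=-\mathbf{v}.  One way to get both facts directly from the axioms: first, 0\mathbf{v}=(0+0)\mathbf{v}=0\mathbf{v}+0\mathbf{v}, and cancelling 0\mathbf{v} in the group (V,+) gives 0\mathbf{v}=\mathbf{0}; then \mathbf{v}+(-1)\mathbf{v}=1\mathbf{v}+(-1)\mathbf{v}=(1+(-1))\mathbf{v}=0\mathbf{v}=\mathbf{0}, so (-1)\mathbf{v} really is the additive inverse of \mathbf{v}.)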

But really what you need are examples.  If you’ve seen vectors in \mathbb{R}^2 and \mathbb{R}^3 in school, it’s hopefully clear that they fit this description.  If you haven’t, here are the basics: vectors are drawn as arrows in space and represent the act of moving a certain distance in a certain direction.  We write them the same way we write points: (0,4,3) means “move 4 units in the y-direction and 3 in the z-direction.”  In this sense, it doesn’t matter where we put the arrows, i. e. they don’t have a fixed start or end-point, but can be moved around \mathbb{R}^n, as long as they keep their length and direction.  Two arrows are added by putting the start of \mathbf{w} at the end of \mathbf{v}, then drawing an arrow from the start of \mathbf{v} to the end of \mathbf{w}.  Scalar multiplication just corresponds to changing the length of a vector (and flipping it if the scalar is negative).  We just call this vector space \mathbb{R}^n.  At left are the basics of how vectors work there.
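If you like to compute, here’s a minimal sketch of this picture in Python (the helper names add and scale are mine, and tuples stand in for arrows):

```python
# Vectors in R^3 as tuples; tip-to-tail addition is componentwise,
# and scalar multiplication just rescales each coordinate.
def add(v, w):
    return tuple(vi + wi for vi, wi in zip(v, w))

def scale(a, v):
    return tuple(a * vi for vi in v)

v, w, zero = (0, 4, 3), (1, -2, 0), (0, 0, 0)
assert add(v, w) == (1, 2, 3)
assert scale(2, v) == (0, 8, 6)
# spot-checking two of the axioms: a(v + w) = av + aw and 0v = 0
assert scale(2, add(v, w)) == add(scale(2, v), scale(2, w))
assert scale(0, v) == zero
```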

Another example is the set of polynomials of degree at most n with real coefficients.  (Recall that a polynomial is a function of the form a_0+a_1x+a_2x^2+\dotsb+a_nx^n, and its degree is the highest power of x with a nonzero term.)  The sum of two such polynomials still has degree at most n, and likewise scalar multiplication gives back a polynomial of degree at most n.  We don’t even have to bound the degree: the set of all polynomials is a vector space, as is the set of power series, which are like polynomials but with possibly infinitely many terms.  Hell, we could go further: the set of sums \dotsb+a_{-2}x^{-2}+a_{-1}x^{-1}+a_0+a_1x+a_2x^2+\dotsb is a vector space.  If X is a set, the set of functions X\rightarrow\mathbb{R} is a vector space, as is the set of bounded functions, continuous functions (if X has a topology), or differentiable functions (if X=\mathbb{R} or some other set we can do calculus on).  Make sure you understand why all of these are true.
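To see the polynomial example concretely, here’s a small sketch of my own, storing a polynomial of degree at most n as its coefficient list [a_0, a_1, ..., a_n]:

```python
def poly_add(p, q):
    # adding term by term can't raise the degree
    return [a + b for a, b in zip(p, q)]

def poly_scale(c, p):
    # multiplying every coefficient by a scalar can't raise it either
    return [c * a for a in p]

p = [1, 2, 3]   # 1 + 2x + 3x^2
q = [4, 0, -1]  # 4 - x^2
assert poly_add(p, q) == [5, 2, 2]        # 5 + 2x + 2x^2, still degree <= 2
assert poly_scale(-2, p) == [-2, -4, -6]  # -2 - 4x - 6x^2
```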

The natural morphism of vector spaces is called a linear function or transformation.  As you might expect, it must preserve vector addition and scalar multiplication, which is nicely summed up in the statement f(a\mathbf{v}+b\mathbf{w})=af(\mathbf{v})+bf(\mathbf{w}).  And, of course, an isomorphism of vector spaces is just a bijection that’s linear both ways.  The name comes from the fact that linear transformations \mathbb{R}\rightarrow\mathbb{R} have graphs that are just lines (prove this?  you want to show that x\mapsto ax is the only possible kind of linear transformation from \mathbb{R} to itself).  Likewise, we can talk about vector subspaces, which are just subsets of a vector space that are vector spaces with the same operations.
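As a sanity check, you can test the defining equation numerically.  Here’s a sketch (the maps f and g are examples I made up: f is linear, while g, which translates the plane by a fixed vector, is not):

```python
def add(v, w):
    return tuple(vi + wi for vi, wi in zip(v, w))

def scale(a, v):
    return tuple(a * vi for vi in v)

def looks_linear(f, samples, scalars):
    # spot-check f(av + bw) = af(v) + bf(w) on a few inputs (evidence, not a proof)
    return all(
        f(add(scale(a, v), scale(b, w))) == add(scale(a, f(v)), scale(b, f(w)))
        for v in samples for w in samples for a in scalars for b in scalars
    )

def f(v):
    return (2 * v[0] + v[1], -v[1])   # linear

def g(v):
    return (v[0] + 1, v[1])           # shifts the plane: not linear

samples, scalars = [(1, 0), (0, 1), (3, -2)], [0, 1, -1, 2]
assert looks_linear(f, samples, scalars)
assert not looks_linear(g, samples, scalars)
```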

Just like we learned to build topologies from bases and groups from generators, we can build vector spaces from their version of bases.  The key is the idea of a linear combination, which is a finite sum of scalar multiples of vectors: a_1\mathbf{v}_1+a_2\mathbf{v}_2+\dotsb+a_n\mathbf{v}_n.  The span of a set of vectors is the set of their linear combinations, and dually, a set of vectors is linearly independent if the only linear combination of them equal to the zero vector has all of the a_i zero.  Think about this in terms of \mathbb{R}^n: the span of a set of vectors is the smallest line, plane, space, or whatever containing them.  If they are linearly dependent, then we can write at least one of them in terms of the others, so we can choose a proper subset of them with the same span.  In \mathbb{R}^n, this means that, for example, 3 linearly dependent vectors only span a plane, a line, or the origin, rather than a full three-dimensional subspace.  A basis for a vector space is a linearly independent set of vectors that spans it.  (If this is infinite, remember we’re only allowing finite sums, since in general, abelian groups aren’t closed under infinite sums.)
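Here’s a numerical way to see dependence, using numpy (my own sketch; the third vector below is the sum of the first two, so the three of them only span the xy-plane):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 0.0])

# try to write v3 as a*v1 + b*v2: solve the 3x2 system [v1 v2] @ (a, b) = v3
A = np.column_stack([v1, v2])
(a, b), *_ = np.linalg.lstsq(A, v3, rcond=None)
print(a, b)                              # approximately 1.0 1.0
print(np.allclose(a * v1 + b * v2, v3))  # True: the set {v1, v2, v3} is dependent
```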

The following things are true about bases, and you should prove them:

  • They are maximal linearly independent sets: if you add any other vector to them, they become linearly dependent.
  • They are minimal spanning sets: if you remove any vector, their span shrinks.
  • Every vector space has a basis, and in particular, every linearly independent set is a subset of a basis, and every spanning set is a superset of one.  (You need the axiom of choice here.)
  • Any vector in the vector space is a unique linear combination of basis elements.  (This is a consequence of every vector being at least one linear combination of spanning elements, and every vector being at most one linear combination of linearly independent vectors.)
  • Any two bases for a vector space have the same cardinality.  The proof for this is a bit more involved.
    • Suppose that \{a_i\}_{i\in I} and \{b_j\}_{j\in J} are bases with |I|>|J|.  If I is infinite, write each b_j as a linear combination of the a_i, and let E_j be the set of i\in I for which b_j has a nonzero coefficient.  Then \bigcup_{j\in J}E_j is a union over J of finite sets, so its cardinality is at most |J| (or finite, if J is finite), and in either case less than |I|; in particular, there’s an i_0\in I-\bigcup_{j\in J}E_j (using the axiom of choice).  We can then write a_{i_0} as a linear combination of the b_j, and hence, substituting, as a linear combination of a_i not including a_{i_0} itself.  So the a_i aren’t linearly independent.
    • If I is finite, instead write each a_i as a linear combination of the b_j.  We can then create a matrix whose (i,j)th entry is the b_j coefficient of a_i.  The proof then uses a theorem that says that the number of linearly independent rows of a matrix is equal to the number of linearly independent columns.  If you’re inquisitive, you should look this up (the key phrase is “row rank equals column rank”), but I don’t really have the space for it here.
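This isn’t a proof, but if you want numerical reassurance that row rank equals column rank, numpy will happily compute both (the example matrix is mine):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],   # a multiple of the first row
              [0, 1, 1]])
print(np.linalg.matrix_rank(A))    # 2: two linearly independent rows
print(np.linalg.matrix_rank(A.T))  # 2: the same number of independent columns
```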

We call the cardinality of a vector space’s basis its dimension.  In fact, vector spaces behave somewhat like free groups with respect to their bases: any map of a basis into a vector space extends to a unique linear map from the first vector space to the second.  (proof by obviosity: let f(a_1\mathbf{v}_1+\dotsb+a_n\mathbf{v}_n)=a_1f(\mathbf{v}_1)+\dotsb+a_nf(\mathbf{v}_n).)  One application of this is that any two n-dimensional vector spaces are isomorphic for n finite: a bijection between their bases extends to a unique linear map, which is an isomorphism precisely because both sets are bases.  So \mathbb{R}^n is really the only n-dimensional real vector space, up to isomorphism.
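Here’s the “proof by obviosity” as a sketch in code (the names are mine; basis_images[i] says where the i-th standard basis vector of \mathbb{R}^n should go):

```python
def extend_linearly(basis_images):
    # f(a_1 e_1 + ... + a_n e_n) = a_1 f(e_1) + ... + a_n f(e_n)
    def f(v):
        out = tuple(0 for _ in basis_images[0])
        for a, image in zip(v, basis_images):
            out = tuple(o + a * x for o, x in zip(out, image))
        return out
    return f

# send e_1 to (0, 1) and e_2 to (-1, 0): a quarter-turn of the plane
f = extend_linearly([(0, 1), (-1, 0)])
print(f((3, 2)))   # (-2, 3)
```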

Matrix multiplication

One last thing: matrices.  “Matrix” is a word almost as scary as “vector”, but they’re really quite simple.  An m\times n matrix is a rectangular array of numbers (a_{ij}), where a_{ij} is in the ith row and jth column, and 1\le i\le m,1\le j\le n.  We can add and subtract m\times n matrices, and there’s a noncommutative multiplication operation that multiplies an m\times n matrix and an n\times p matrix to get an m\times p matrix.  It’s defined by letting the (i,j) entry of the product be c_{ij}=\sum_{k=1}^na_{ik}b_{kj}, as shown at left.
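In code, the definition is just a double loop with a sum inside.  A sketch (matrices as lists of rows; matmul is my name for it):

```python
def matmul(A, B):
    m, n, p = len(A), len(A[0]), len(B[0])
    assert len(B) == n, "need an m x n matrix times an n x p matrix"
    # c_ij = sum over k of a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2],
     [3, 4],
     [5, 6]]          # 3 x 2
B = [[1, 0, -1],
     [2, 1, 0]]       # 2 x 3
print(matmul(A, B))   # [[5, 2, -1], [11, 4, -3], [17, 6, -5]], a 3 x 3 matrix
```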

This looks kind of stupid, yes, and it’s rarely explained in school why it’s defined this way.  Here’s why.  If we’ve chosen bases (\mathbf{v}_i),(\mathbf{w}_j) for finite-dimensional vector spaces V and W, then any linear map f:V\rightarrow W can be described by a matrix, and vice versa: just let a_{ij} be the \mathbf{w}_i coefficient of f(\mathbf{v}_j).  This matrix is \dim W\times \dim V.  If we write \mathbf{v}\in V as a \dim V\times 1 matrix of coefficients, then multiplying the matrix for f by the matrix for \mathbf{v} gives a \dim W\times 1 matrix that is precisely the image of \mathbf{v} in terms of the basis (\mathbf{w}_j).  Likewise, composing two maps is given by multiplying their matrices, the inverse of a linear isomorphism is given by the inverse of its matrix, and so on.  A special case is when we have two different bases for V: we can change coordinates from one to the other by just constructing the above matrix.
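As a concrete instance (my own example, tying this back to the polynomial space above): differentiation is a linear map on polynomials of degree at most 2, and in the basis 1, x, x^2 its matrix has as its jth column the coefficients of the derivative of the jth basis polynomial.

```python
import numpy as np

# columns: d/dx(1) = 0, d/dx(x) = 1, d/dx(x^2) = 2x
D = np.array([[0, 1, 0],
              [0, 0, 2],
              [0, 0, 0]])

p = np.array([[3], [5], [2]])   # p(x) = 3 + 5x + 2x^2, as a (dim V) x 1 column
print(D @ p)        # [[5] [4] [0]], i.e. p'(x) = 5 + 4x
print(D @ D @ p)    # [[4] [0] [0]], i.e. p''(x) = 4: composing maps is multiplying matrices
```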

Is this too easy or too hard?  I can definitely talk more about matrices or vectors if the subject is confusing.  The last important thing I need to talk about before Banach-Tarski depends on this matrix and vector-space machinery: it’s the group SO(3) of rotations of \mathbb{R}^3.  I have no problem with taking a slower route there, if need be.

 


1The “real” is because we’re using the real numbers. In fact, this isn’t the only choice: we can define a vector space over any field, which is basically a set where you can do addition, subtraction, multiplication, and division. Outside of abstract algebra, the only really common choice is the complex numbers, but, for example, the rationals are also a field.  Elements of the field are called scalars.

 






