Filed under: Algebra, Math | Tags: algebra, geometry, group theory, MaBloWriMo, Math, topology

Finite-dimensional vector spaces come packed with something extra: an inner product. An **inner product** is a map that multiplies two vectors and gives you a scalar. It’s usually written with a dot, $v\cdot w$, or with angle brackets, $\langle v,w\rangle$. For real vector spaces, we define it to be a map $\langle\cdot,\cdot\rangle:V\times V\to\mathbb{R}$ with the following properties:

- Symmetry: $\langle v,w\rangle=\langle w,v\rangle$
- Bilinearity: $\langle av+bu,w\rangle=a\langle v,w\rangle+b\langle u,w\rangle$, where $a,b$ are scalars and $u$ is another vector, and the same for the second coordinate
- Positive-definiteness: $\langle v,v\rangle\geq 0$, and it is only equal to $0$ when $v=0$.

(I’m going to stop using boldface for vectors, since it’s usually clear what’s a vector and what’s not.) One of the uses of an inner product is to define the **length** of a vector: just set $\|v\|=\sqrt{\langle v,v\rangle}$. This is $0$ only if $v$ is, and otherwise it’s always real and positive because the inner product is positive definite. Another use is to define the **angle** $\theta$ between two nonzero vectors $v$ and $w$: set $\cos\theta=\frac{\langle v,w\rangle}{\|v\|\,\|w\|}$. In particular, $\theta$ is right iff $\langle v,w\rangle=0$. In this case, we say $v$ and $w$ are **orthogonal**.

In Euclidean space $\mathbb{R}^n$, the inner product is the **dot product**: $v\cdot w=\sum_{i=1}^n v_iw_i$. This is primarily what we’re concerned with today, so we’ll return to abstract inner products another day.
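As a quick numerical illustration (NumPy, with made-up vectors), here are the length and angle computations coming out of the dot product:

```python
import numpy as np

v = np.array([3.0, 4.0])
w = np.array([4.0, -3.0])

dot = np.dot(v, w)                   # the inner product <v, w>
length_v = np.sqrt(np.dot(v, v))     # ||v|| = sqrt(<v, v>)
cos_theta = dot / (np.linalg.norm(v) * np.linalg.norm(w))

print(dot)        # 0.0
print(length_v)   # 5.0
print(cos_theta)  # 0.0, so the angle between v and w is right: they are orthogonal
```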

Last time, we talked about writing linear transformations as matrices. We can also write dot products as matrices, if we treat vectors as $n\times 1$ matrices:

$v\cdot w=v^Tw$

We define the **transpose** of a matrix $A=(A_{ij})$ as the matrix $A^T$ with $(A^T)_{ij}=A_{ji}$. Basically, this flips the matrix diagonally, so the rows become columns: the transpose of an $m\times n$ matrix is an $n\times m$ one. So $v^T$ is a $1\times n$ matrix, $v^Tw$ is a $1\times 1$ matrix, that is, a scalar, and that scalar is exactly $v\cdot w$.
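In NumPy terms (with throwaway vectors), treating vectors as $n\times 1$ matrices:

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])   # a vector as a 3x1 matrix
w = np.array([[4.0], [5.0], [6.0]])

# v.T is 1x3, so v.T @ w is a 1x1 matrix holding the dot product.
vtw = (v.T @ w)[0, 0]
print(vtw)  # 32.0, same as 1*4 + 2*5 + 3*6
```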

But this only works in the **standard basis** $\{e_1,\dots,e_n\}$. Say you want to change coordinates from a basis $\{e_1,\dots,e_n\}$ to a basis $\{f_1,\dots,f_n\}$, where $f_i=\sum_j P_{ji}e_j$. We can write these equations as a single matrix equation:

$F=EP$

(Remember that each $e_i$ and $f_i$ is actually an $n\times 1$ matrix. So $E$ and $F$, whose columns are the $e_i$ and the $f_i$ respectively, are actually $n\times n$ matrices.) Now say we’ve got a vector in $e$ coordinates, $v=\sum_i a_ie_i=Ea$, where $a$ is the column of coordinates $a_i$. If we want to change this into $f$ coordinates, we’re going to end up with $v=\sum_i b_if_i=Fb$, where $b$ is the column of coordinates $b_i$. Substituting in $F=EP$, we get $Ea=EPb$. At this point it should be mentioned that *not all matrices have multiplicative inverses*, but matrices with linearly independent columns do. I’ll leave the proof to you unless people are interested enough in matrix algebra. So since the columns of $E$ form a basis, we can cancel it from both sides, to get $a=Pb$, or $b=P^{-1}a$. ($P$ is invertible because it’s equal to $E^{-1}F$, and the set of invertible matrices forms a group.)

So after an annoying calculation which I hate doing and am never going to do again, we’ve found that when we change a basis, the expression for a vector is multiplied by the *inverse* of the change-of-basis matrix!
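To make the calculation concrete, here’s a NumPy check with a made-up basis and change-of-basis matrix:

```python
import numpy as np

E = np.eye(2)                          # columns are the old basis e_1, e_2
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])             # an invertible change-of-basis matrix
F = E @ P                              # columns are the new basis f_1, f_2

a = np.array([3.0, 5.0])               # coordinates of v in the e basis
v = E @ a

# The new coordinates are the old ones times the *inverse* of P.
b = np.linalg.inv(P) @ a
print(np.allclose(F @ b, v))  # True: both coordinate columns describe the same vector
```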

But, see, this is a problem. Because if we change coordinates using $P$, then $v\cdot w=v^Tw$ becomes $(P^{-1}v)^T(P^{-1}w)=v^T(P^T)^{-1}P^{-1}w$. (If this is new to you, you should prove that the transpose operation reverses the order of multiplication, $(AB)^T=B^TA^T$, and that the inverse of the transpose of a matrix is the transpose of its inverse.) In general, this is going to be a different number, so things like lengths and angles will be measured differently.
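A quick numerical illustration of the problem (the matrix $P$ here is a made-up, non-orthogonal example):

```python
import numpy as np

P = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # a non-orthogonal change of basis: stretch the x-axis
Pinv = np.linalg.inv(P)

v = np.array([1.0, 0.0])
w = np.array([1.0, 1.0])

# Dot product of the original coordinates vs. the changed coordinates.
before = v @ w
after = (Pinv @ v) @ (Pinv @ w)
print(before, after)  # different numbers: lengths and angles get distorted
```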

An **isometry** is a map $f$ that preserves the inner product: $\langle f(v),f(w)\rangle_2=\langle v,w\rangle_1$, where $\langle\cdot,\cdot\rangle_1$ and $\langle\cdot,\cdot\rangle_2$ are the respective inner products. Again, we only care about maps $\mathbb{R}^n\to\mathbb{R}^n$ for the moment, so we can assume that the inner products are both the ordinary dot product and treat every linear map as the inverse of a change of basis.

So change of basis by $P$ will send $v\cdot w$ to $v^T(P^T)^{-1}P^{-1}w$. Clearly, this is only an isometry for all $v$ and $w$ when $P^T=P^{-1}$. We call such a matrix an **orthogonal matrix**, and we call the set of such $n\times n$ matrices $O(n)$. Here are some useful facts about orthogonal matrices:

- The product of two orthogonal matrices is orthogonal. (Easy to prove from the definition.)
- The inverse of an orthogonal matrix is orthogonal. (Same.)
- The identity matrix is orthogonal. Remember, $A$ is orthogonal if $A^T=A^{-1}$, and certainly $I^T=I=I^{-1}$. So $O(n)$ is a group.
- The **determinant** of a square matrix, $\det A$, is a real-valued function that basically says how the matrix, as a linear transformation, changes volume: if we apply $A$ to a unit $n$-cube, we get back an $n$-parallelepiped whose $n$-volume is $|\det A|$. I don’t have the time to flesh this out right now, but if you remember the definitions for $2$ and $3$ dimensions from high school, you should be fine. Now, $\det(AB)=\det A\det B$, so if $AB=I$, then $\det A\det B=1$. But the transpose operation also preserves determinants, so this gives $\det A=\pm 1$ for orthogonal matrices.
- The subset of $O(n)$ whose determinant is $1$ *also* forms a group. It’s called $SO(n)$, the **special orthogonal group**. We can consider these to be the orthogonal linear maps that also don’t involve any reflections.
- The image of the standard basis under an orthogonal map is called an **orthonormal basis**. Orthonormal bases are characterized by $v_i\cdot v_i=1$, and $v_i\cdot v_j=0$ for $i\neq j$. The “normal” part means length $1$: if only the second condition holds, we might call this an **orthogonal basis**. The orthogonal maps are in bijection with the orthonormal bases: just send every orthogonal map to the image of the standard basis under it.
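As a small sanity check (a made-up reflection example in NumPy) exhibiting the facts above:

```python
import numpy as np

# A reflection that swaps the two coordinate axes: an orthogonal matrix.
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q^T = Q^{-1}
print(np.linalg.det(Q))                  # -1.0: in O(2) but not SO(2)

# Orthogonal matrices preserve the dot product.
v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])
print((Q @ v) @ (Q @ w) == v @ w)        # True
```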

Oh, let me introduce some notation: the **unit $n$-sphere**, $S^n$, is the set of vectors in $\mathbb{R}^{n+1}$ with norm $1$. So $S^1$ is the circle, $S^2$ is the regular sphere, et cetera. $S^0$ is just two points. The exponent doesn’t mean there’s a product being taken, nor does it really have to do with $\mathbb{R}^{n+1}$ (after all, we could embed an $n$-sphere in $\mathbb{R}^{n+2}$ or whatever). No, it actually has to do with $S^n$ being what’s called a **manifold**: every point of $S^n$ has a neighborhood that’s homeomorphic to $\mathbb{R}^n$. So locally, the circle looks like a line, the sphere like a plane, et cetera.

Let’s look at $O(2)$ to begin with. The standard basis vectors are just $e_1=(1,0)$ and $e_2=(0,1)$; an orthogonal map has to send them both to vectors of unit length, which we can imagine as points on the unit circle (fixing the tails of our vectors at the origin). In fact, once we’ve chosen one of these vectors, we only have two choices for the other, namely, the two unit vectors at right angles to it. If we only look at $SO(2)$, then we only have *one* choice for the second vector, because the other one will involve a reflection. Thus, $SO(2)$ is isomorphic to $S^1$! Every element of $SO(2)$ simply rotates the plane by some angle $\theta$ about the origin! The elements of $SO(2)$ are, in fact, of the following form:

$R_\theta=\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}$
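A quick numerical check (NumPy, with arbitrary angles of my choosing) that these rotation matrices behave like points on the circle, composing by adding angles:

```python
import numpy as np

def rot(theta):
    """The element of SO(2) rotating the plane by theta about the origin."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

a, b = 0.4, 1.1  # arbitrary angles
# Composing rotations adds their angles, just like multiplying points on S^1.
print(np.allclose(rot(a) @ rot(b), rot(a + b)))   # True
# And every rot(theta) is orthogonal.
print(np.allclose(rot(a).T @ rot(a), np.eye(2)))  # True
```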

For $SO(3)$, the situation is a bit more complicated. We’re picking vectors on $S^2$ now. Once we’ve picked one, we can draw the line in that direction, and then the plane perpendicular to that line. The other two vectors have to lie in the unit circle in that plane, and the order we pick them in determines whether we’re in $SO(3)$ or not. So is $SO(3)$ just $S^2\times S^1$? Not quite. See, if we start with the vector in the *opposite* direction, the other two vectors are constrained to the *same* circle; only now, to land in $SO(3)$, we have to flip their order. In fact, $SO(3)$ is homeomorphic to a thing called $\mathbb{RP}^3$, called **real projective space**, which is pretty much the topological space of *lines through the origin in 4-space*. If you want details about this, I suggest you google it: the Wikipedia article is unusually bad, and I don’t know any single good resource about it. *Basically*, $SO(3)$ works this way: we pick an axis of rotation, and then we pick an angle to rotate by.
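This axis-and-angle description can be sketched numerically with Rodrigues’ rotation formula (the formula isn’t derived in this post, and the function name is mine). Note how an opposite axis with an opposite angle gives the *same* rotation, which is exactly the identification behind $\mathbb{RP}^3$:

```python
import numpy as np

def axis_angle(axis, theta):
    """Rotation about the unit vector `axis` by angle theta (Rodrigues' formula)."""
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)
    K = np.array([[0, -u[2], u[1]],
                  [u[2], 0, -u[0]],
                  [-u[1], u[0], 0]])  # the "cross product with u" matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = axis_angle([0.0, 0.0, 1.0], np.pi / 2)  # quarter turn about the z-axis
print(np.allclose(R @ [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # True: e_1 goes to e_2

# Opposite axis, opposite angle: the same element of SO(3).
print(np.allclose(axis_angle([0, 0, 1], 0.5), axis_angle([0, 0, -1], -0.5)))  # True
```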

These aren’t the only isometries, either: only the linear ones, the isometries of the *vector space* $\mathbb{R}^n$. In a broader sense, we can define an **isometry** of a space as a map which doesn’t change the distance between points. In addition to $O(n)$, every element of which has to fix the origin, we also have a **translation group** that’s isomorphic to $\mathbb{R}^n$. Namely, each element $v$ sends $x$ to $x+v$.

If we translate by $v$, and then rotate by $A$, this is the same as rotating by $A$ first and then translating by $Av$ (treating $v$ as a vector). So $O(n)$ (or $SO(n)$) acts on the translation group. The entire group of isometries is something called a **semidirect product** of the two. I haven’t defined products of groups formally yet, but basically what this is is when you have two groups $G$ and $H$ and an action of $G$ on $H$, the semidirect product is $H\rtimes G=H\times G$ as a set, with the group operation being $(h_1,g_1)(h_2,g_2)=(h_1(g_1\cdot h_2),g_1g_2)$. This has $H$ as a normal subgroup and $G\cong (H\rtimes G)/H$.
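Here’s a sketch of this multiplication rule for plane isometries (the helper names `apply` and `compose` are mine, and the specific rotation and translations are arbitrary):

```python
import numpy as np

def apply(iso, x):
    """Apply the isometry (v, A): rotate by A, then translate by v."""
    v, A = iso
    return A @ x + v

def compose(g, h):
    """Semidirect-product multiplication: (v1, A1)(v2, A2) = (v1 + A1 v2, A1 A2)."""
    (v1, A1), (v2, A2) = g, h
    return (v1 + A1 @ v2, A1 @ A2)

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])                   # a quarter turn
g = (np.array([1.0, 2.0]), A)
h = (np.array([3.0, 0.0]), A)

x = np.array([5.0, 7.0])
# Multiplying in the semidirect product matches composing the maps.
print(np.allclose(apply(compose(g, h), x), apply(g, apply(h, x))))  # True
```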

The **isometry group** of $\mathbb{R}^n$ is $E(n)=\mathbb{R}^n\rtimes O(n)$. Every isometry is therefore a composition of a rotation, a reflection, and a translation, which is a nontrivial statement, but I don’t have space to prove it. Usually, it’s irrelevant whether you allow reflections or not, and I generally prefer to disallow them, so in what follows, I’ll only be thinking about the **special isometry group** $SE(n)=\mathbb{R}^n\rtimes SO(n)$.

We’re now all set up to prove the Banach-Tarski Paradox. I don’t know if I’ll have the energy or time to do this tomorrow, but I might split it in half and do half tomorrow and half on Saturday.

