# Gracious Living

Isometries of Euclidean Space
November 18, 2010, 21:06
Filed under: Algebra, Math

Finite-dimensional vector spaces $\mathbb{R}^n$ come packed with something extra: an inner product.  An inner product is a map that multiplies two vectors and gives you a scalar.  It’s usually written with a dot, or with angle brackets.  For real vector spaces, we define it to be a map $V\times V\rightarrow\mathbb{R}$ with the following properties:

• Symmetry: $\langle x,y\rangle=\langle y,x\rangle$
• Bilinearity: $\langle ax+bx^\prime,y\rangle=a\langle x,y\rangle+b\langle x^\prime,y\rangle$, where $a,b$ are scalars and $x^\prime$ is another vector, and the same for the second coordinate
• Positive-definiteness: $\langle x,x\rangle\ge 0$, and it is only equal to $0$ when $x=0$.
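These axioms are easy to spot-check numerically for the dot product. Here's a quick sketch in plain Python (the `dot` helper and the test vectors are mine, just for illustration):

```python
# Numeric sanity check of the inner-product axioms for the dot product.
def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

x, y = [1.0, 2.0, 3.0], [4.0, -1.0, 0.5]
xp = [0.0, 1.0, -2.0]   # plays the role of x' in the bilinearity axiom
a, b = 2.0, -3.0

# Symmetry
assert dot(x, y) == dot(y, x)
# Bilinearity in the first slot: <ax + bx', y> = a<x,y> + b<x',y>
lhs = dot([a * u + b * v for u, v in zip(x, xp)], y)
assert abs(lhs - (a * dot(x, y) + b * dot(xp, y))) < 1e-9
# Positive-definiteness
assert dot(x, x) > 0
assert dot([0.0, 0.0, 0.0], [0.0, 0.0, 0.0]) == 0.0
```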

(I’m going to stop using boldface for vectors, since it’s usually clear what’s a vector and what’s not.)  One of the uses of an inner product is to define the length of a vector: just set $\|x\|=\sqrt{\langle x,x\rangle}$.  This is only $0$ if $x$ is, and otherwise it’s always real and positive because the inner product is positive definite.  Another use is to define the angle between two nonzero vectors: set $\cos\theta=\frac{\langle x,y\rangle}{\|x\|\|y\|}$.  In particular, $\theta$ is right iff $\langle x,y\rangle=0$.  In this case, we say $x$ and $y$ are orthogonal.
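In code, both definitions fall straight out of the inner product. A plain-Python sketch (the helper names `norm` and `angle` are mine):

```python
import math

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    # length of a vector: sqrt(<x, x>)
    return math.sqrt(dot(x, x))

def angle(x, y):
    # angle between nonzero vectors, from cos(theta) = <x,y> / (|x||y|)
    return math.acos(dot(x, y) / (norm(x) * norm(y)))

# (1,0) and (0,1) are orthogonal: dot product 0, angle pi/2
assert dot([1.0, 0.0], [0.0, 1.0]) == 0.0
assert abs(angle([1.0, 0.0], [0.0, 1.0]) - math.pi / 2) < 1e-12
```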

In Euclidean space, the inner product is the dot product: $\langle (x_1,x_2,\dotsc,x_n),(y_1,y_2,\dotsc,y_n)\rangle=x_1y_1+x_2y_2+\dotsb+x_ny_n$.  This is primarily what we’re concerned with today, so we’ll return to abstract inner products another day.

Last time, we talked about writing linear transformations as matrices.  We can also write dot products as matrices, if we treat vectors as $n\times 1$ matrices:

$x\cdot y=(x_1,\dotsc,x_n)\begin{pmatrix}y_1 \\ \vdots \\ y_n\end{pmatrix}$

We define the transpose of a matrix $A=(a_{ij})$ as $A^T=(a_{ji})$.  Basically, this flips the matrix diagonally, so the rows become columns: the transpose of an $n\times m$ matrix is an $m\times n$ one.  So $x\cdot y=x^Ty$.
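This identity is just matrix multiplication on $n\times 1$ matrices. A plain-Python sketch (the `transpose` and `matmul` helpers are mine):

```python
def transpose(A):
    # rows become columns: the (i, j) entry of A^T is the (j, i) entry of A
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

x = [[1.0], [2.0], [3.0]]   # vectors as n x 1 matrices
y = [[4.0], [5.0], [6.0]]

# x^T y is a 1 x 1 matrix whose single entry is the dot product
assert matmul(transpose(x), y) == [[32.0]]
```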

But this only works in the standard basis $(e_1=(1,0,\dotsc,0),e_2=(0,1,\dotsc,0),\dotsc,e_n=(0,0,\dotsc,1))$.  Say you want to change coordinates from a basis $v_1,\dotsc,v_n$ to a basis $w_1,\dotsc,w_n$, where $w_i=p_{1i}v_1+\dotsb+p_{ni}v_n$.  We can write these equations as a matrix:

$\displaystyle W= (w_1,\dotsc,w_n) = (v_1, \dotsc, v_n)\begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn}\end{pmatrix}=VP$

(Remember that each $v_i$ and $w_j$ is actually an $n\times 1$ matrix.  So $W$ and $V$ are actually $n\times n$ matrices.)  Now say we’ve got a vector in $v$ coordinates, $a=a_1v_1+\dotsb + a_nv_n=VA$, where $A=(a_1,\dotsc,a_n)^T$.  If we want to change this into $w$ coordinates, we’re going to end up with $a=b_1w_1+\dotsb+b_nw_n=WB$, where $B=(b_1,\dotsc,b_n)^T$.  Substituting in for $W$, we get $VPB=VA$.  At this point it should be mentioned that not all $n\times n$ matrices have multiplicative inverses, but matrices with linearly independent columns do.  I’ll leave the proof to you unless people are interested enough in matrix algebra.  So since the columns of $V$ form a basis, we can cancel it from both sides, to get $PB=A$, or $B=P^{-1}A$.  ($P$ is invertible because it’s equal to $V^{-1}W$, and the set of invertible matrices forms a group.)

So after an annoying calculation which I hate doing and am never going to do again, we’ve found that when we change a basis, the expression for a vector is multiplied by the inverse of the change-of-basis matrix!
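Here's the calculation in miniature, for a hypothetical $2\times 2$ example in plain Python (the `inv2` and `matvec` helpers and the particular basis are mine):

```python
def inv2(P):
    # explicit inverse of a 2x2 matrix
    (a, b), (c, d) = P
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

P = [[1.0, 1.0], [0.0, 1.0]]   # w1 = v1, w2 = v1 + v2
A = [3.0, 5.0]                 # a = 3 v1 + 5 v2
B = matvec(inv2(P), A)         # the same vector in w coordinates: B = P^{-1} A

# check by hand: b1 w1 + b2 w2 = (b1 + b2) v1 + b2 v2, so b1 + b2 = 3, b2 = 5
assert B == [-2.0, 5.0]
```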

But, see, this is a problem.  Because if we change coordinates using $P^{-1}$, then $x^Ty$ becomes $x^TP^TPy$.  (If this is new to you, you should prove that the transpose operation reverses the order of multiplication, and that the inverse of the transpose of a matrix is the transpose of its inverse.)  In general, this is going to be a different number, so things like lengths and angles will be measured differently.
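To see the problem concretely: a non-orthogonal $P$ really does change dot products, hence lengths and angles. A tiny plain-Python check (helpers and the particular $P$ are mine):

```python
def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

P = [[2.0, 0.0], [0.0, 1.0]]   # a stretch along the x-axis: not orthogonal
x, y = [1.0, 1.0], [1.0, -1.0]

assert dot(x, y) == 0.0                        # orthogonal before the change
assert dot(matvec(P, x), matvec(P, y)) == 3.0  # x^T P^T P y: no longer orthogonal
```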

An isometry is a map that preserves the inner product: $\langle x,y\rangle_X=\langle Tx,Ty\rangle_Y$, where $T:X\rightarrow Y$ and $\langle\cdot,\cdot\rangle_X,\langle\cdot,\cdot\rangle_Y$ are the respective inner products.  Again, we only care about $\mathbb{R}^n$ and maps $\mathbb{R}^n\rightarrow\mathbb{R}^n$ for the moment, so we can assume that the inner products are both the ordinary dot product and treat every linear map as the inverse of a change of basis.

So change of basis by $P^{-1}$ will send $x^Ty$ to $x^TP^TPy$.  Clearly, this is only an isometry when $P^T=P^{-1}$.  We call such a matrix an orthogonal matrix, and we call the set of such matrices $O(n)$.  Here are some useful facts about orthogonal matrices:

• The product of two orthogonal matrices is orthogonal.  (Easy to prove from the definition.)
• The inverse of an orthogonal matrix is orthogonal.  (Same.)
• The identity matrix $I$ is orthogonal.  Remember, $I_{ii}=1,I_{ij}=0$ if $i\ne j$.  So $O(n)$ is a group.
• The determinant of a square matrix, $\det(A)$, is a real-valued function that basically says how the matrix, as a linear transformation, changes volume: if we apply $A$ to a unit $n$-cube, we get back an $n$-parallelepiped whose $n$-volume is $|\det(A)|$ (the sign of $\det(A)$ records whether orientation is preserved).  I don’t have the time to flesh this out right now, but if you remember the definitions for $2$ and $3$ dimensions from high school, you should be fine.  Now, $\det(AB)=\det(A)\det(B)$, so if $P^T=P^{-1}$, then $\det(P^T)=1/\det(P)$.  But the transpose operation also preserves determinants, so this gives $\det(P)=\pm 1$ for orthogonal matrices.
• The subset of $O(n)$ whose determinant is $1$ also forms a group.  It’s called $SO(n)$, the special orthogonal group.  We can consider these to be the orthogonal linear maps that also don’t involve any reflections.
• The image of the standard basis $e_1,e_2,\dotsc,e_n$ under an orthogonal map is called an orthonormal basis.  Orthonormal bases are characterized by $\|o_i\|=1,\langle o_i,o_j\rangle=0$ for $i\ne j$.  The “normal” part means length 1: if only the second condition holds, we might call this an orthogonal basis.  The orthogonal maps are in bijection with the orthonormal bases: just send every orthogonal map to the image of the standard basis under it.
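The first few of these facts are easy to sanity-check numerically. Here's a plain-Python sketch using a rotation and a reflection in $O(2)$ (the helper names `is_orthogonal`, `det2`, etc. are mine):

```python
import math

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def is_orthogonal(M, eps=1e-12):
    # M is orthogonal iff M^T M = I
    MtM = matmul(transpose(M), M)
    I = [[1.0, 0.0], [0.0, 1.0]]
    return all(abs(MtM[i][j] - I[i][j]) < eps
               for i in range(2) for j in range(2))

t = 0.3
Q = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]  # a rotation
R = [[0.0, 1.0], [1.0, 0.0]]                                   # a reflection

assert is_orthogonal(Q) and is_orthogonal(R)
assert is_orthogonal(matmul(Q, R))   # products of orthogonal matrices are orthogonal
assert abs(det2(Q) - 1.0) < 1e-12    # the rotation lies in SO(2)
assert abs(det2(R) + 1.0) < 1e-12    # the reflection has determinant -1
```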

Oh, let me introduce some notation: the unit $n$-sphere, $S^n$, is the set of vectors in $\mathbb{R}^{n+1}$ with norm $1$.  So $S^1$ is the circle, $S^2$ is the regular sphere, et cetera.  $S^0$ is just two points.  The exponent doesn’t mean there’s a product being taken, nor does it really have to do with $\mathbb{R}^{n+1}$ (after all, we could embed an $n$-sphere in $\mathbb{R}^{n+2}$ or whatever).  No, it actually has to do with $S^n$ being what’s called a manifold: every point of $S^n$ has a neighborhood that’s homeomorphic to $\mathbb{R}^n$.  So locally, the circle looks like a line, the sphere like a plane, et cetera.

Let’s look at $O(2)$ to begin with.  The standard basis vectors are just $(1,0)$ and $(0,1)$; an orthogonal map has to send them both to vectors of unit length, which we can imagine as points on the unit circle (fixing the tails of our vectors at the origin).  In fact, once we’ve chosen one of these vectors, we only have two choices for the other, namely, the unit vectors at right angles to it.  If we only look at $SO(2)$, then we only have one choice for the second vector, because the other one will involve a reflection.  Thus, $SO(2)$ is isomorphic to $S^1$!  Every element of $SO(2)$ simply rotates the plane by some angle about the origin!  The elements of $SO(2)$ are, in fact, of the following form:

$\displaystyle P=\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\end{pmatrix}$
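Under the identification of $SO(2)$ with the circle, matrix multiplication becomes addition of angles. A quick plain-Python check (the `rot` helper is mine):

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rot(t):
    # the element of SO(2) displayed above, as a function of theta
    return [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]

# rotating by s and then by t is the same as rotating by s + t,
# which is exactly the group structure of the circle
s, t = 0.4, 1.1
A, B = matmul(rot(t), rot(s)), rot(s + t)
assert all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(2) for j in range(2))
```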

For $O(3)$, the situation is a bit more complicated.  We’re picking vectors on $S^2$ now.  Once we’ve picked one, we can draw the line in that direction, and then the plane perpendicular to that line.  The other two vectors have to lie in the unit circle in that plane, and the order we pick them in determines whether we’re in $SO(3)$ or not.  So is $SO(3)$ just $S^2\times S^1$?  Not quite.  See, if we start with the vector in the opposite direction, the other two vectors are constrained to the same circle — only now, to land in $SO(3)$, we have to flip their order.  In fact, $SO(3)$ is homeomorphic to a space called $\mathbb{RP}^3$, real projective space, which is pretty much the topological space of lines through the origin in $4$-space.  If you want details about this, I suggest you google it: the Wikipedia article is unusually bad, and I don’t know any single good resource about it.  Basically, $SO(3)$ works this way: we pick an axis of rotation, and then we pick an angle to rotate.

These aren’t the only isometries, either: only the linear ones, the isometries of the vector space $\mathbb{R}^n$.  In a broader sense, we can define an isometry of a space as a map which doesn’t change the distance between points.  In addition to $O(n)$, every element of which has to fix the origin, we also have a translation group that’s isomorphic to $\mathbb{R}^n$.  Namely, each element $(t_1,t_2,\dotsc,t_n)$ sends $(a_1,\dotsc,a_n)$ to $(a_1+t_1,\dotsc,a_n+t_n)$.
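Translations preserve distance even though they aren't linear. A plain-Python check (the `translate` and `dist` helpers are mine):

```python
import math

def translate(t, a):
    # the translation by t sends (a_1, ..., a_n) to (a_1 + t_1, ..., a_n + t_n)
    return [ai + ti for ai, ti in zip(a, t)]

def dist(x, y):
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

t = [5.0, -2.0]
x, y = [1.0, 1.0], [4.0, 5.0]

assert dist(x, y) == 5.0
assert dist(translate(t, x), translate(t, y)) == 5.0  # distance unchanged
```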

If we translate by $t=(t_1,\dotsc,t_n)$, and then rotate by $R$, this is the same as rotating by $R$ first and then translating by $Rt$ (treating $t$ as a vector).  So $O(n)$ (or $SO(n)$) acts on the translation group.  The entire group of isometries is something called a semidirect product of the two.  I haven’t defined the direct product of groups yet, but roughly: when you have two groups $G$ and $H$ and an action of $G$ on $H$, the semidirect product is $H\rtimes G=\{(h,g):h\in H,g\in G\}$, with the group operation being $(h_1,g_1)(h_2,g_2)=(h_1(g_1\cdot h_2),g_1g_2)$.  This has $H$ as a normal subgroup and $G\cong (H\rtimes G)/H$.
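Here's a sketch of this composition law in plain Python, representing an isometry as a pair $(R,t)$ acting by $x\mapsto Rx+t$ (the helper names and the particular angles are mine):

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def rot(th):
    return [[math.cos(th), math.sin(th)], [-math.sin(th), math.cos(th)]]

def apply_iso(iso, x):
    # an isometry (R, t) acts by x -> Rx + t
    R, t = iso
    return [ri + ti for ri, ti in zip(matvec(R, x), t)]

def compose(iso2, iso1):
    # "first iso1, then iso2": R2(R1 x + t1) + t2 = (R2 R1) x + (R2 t1 + t2),
    # which is exactly the semidirect-product multiplication
    (R2, t2), (R1, t1) = iso2, iso1
    return (matmul(R2, R1), [a + b for a, b in zip(matvec(R2, t1), t2)])

iso1 = (rot(0.7), [1.0, 2.0])
iso2 = (rot(-0.2), [3.0, -1.0])
x = [0.5, 0.5]

# composing the pairs agrees with composing the maps
lhs = apply_iso(compose(iso2, iso1), x)
rhs = apply_iso(iso2, apply_iso(iso1, x))
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```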

The isometry group is $\mathbb{R}^n\rtimes O(n)$.  Every isometry is therefore a composition of a rotation, reflection, and translation, which is a nontrivial statement, but I don’t have space to prove it.  Usually, it’s irrelevant whether you allow reflections or not, and I generally prefer to disallow them, so in what follows, I’ll only be thinking about the special isometry group $\mathbb{R}^n\rtimes SO(n)$.

We’re now all set up to prove the Banach-Tarski Paradox.  I don’t know if I’ll have the energy or time to do this tomorrow, but I might split it in half and do half tomorrow and half on Saturday.