Gracious Living


Rings, Integral Domains, and Fields
February 2, 2011, 19:36
Filed under: Algebra, Math

In which I sort of breeze through a couple of really awesome and really important concepts.  Last time, we classified abelian groups — now we’ll see what happens if we require additional structure on the groups.  In particular, I’m going to construct \mathbb{Z} and \mathbb{Q} similarly to how the Peano axioms constructed \mathbb{N}.
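
One standard way to do this, not necessarily word for word what’s below the fold, is to realize each new number system as a quotient of pairs from the previous one:

\mathbb{Z}=(\mathbb{N}\times\mathbb{N})/\sim, where (a,b)\sim(c,d) iff a+d=b+c,

\mathbb{Q}=(\mathbb{Z}\times(\mathbb{Z}\setminus\{0\}))/\sim, where (p,q)\sim(r,s) iff ps=qr,

with (a,b) playing the role of the difference a-b and (p,q) the role of the fraction p/q.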

Continue reading



The Classification Theorem for Finitely Generated Abelian Groups
January 29, 2011, 16:23
Filed under: Algebra, Math

Wow, it’s been a long time since I’ve written anything on this blog.  I’m taking an algebraic topology course and an algebraic number theory course this semester, and I started reading through Atiyah and Macdonald’s Introduction to Commutative Algebra over the winter.  So I thought I’d continue with a little algebra.  The algebra we’ve done thus far has been highly noncommutative, for the most part — we investigated groups like free groups, symmetric groups, matrix groups, and dihedral groups in which the order of operations mattered.  As you might expect, with abelian groups, the theory becomes much simpler, and the subject called “commutative algebra” is just the study of abelian groups with extra structure — something like a scalar multiplication, as in the case of vector spaces, or some other operation.  But first, we need to understand abelian groups.

When talking about abelian groups specifically, we usually write them additively: the group operation applied to a and b is a+b, and then we can build expressions like 3a+2b.  The proof I give below is due to J. S. Milne, who in turn says it’s similar to Kronecker’s original proof.  Of course, I’ve added more detail in places where I thought it was necessary, and taken it out where I thought it wasn’t.  There are other, more common proofs, typically using matrices, but I find them unwieldy and inelegant.
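
To fix the target, here is the statement in its invariant-factor form (some sources state it with prime powers instead; the two versions are equivalent): every finitely generated abelian group A satisfies

A\cong\mathbb{Z}^r\oplus\mathbb{Z}/d_1\oplus\mathbb{Z}/d_2\oplus\dotsb\oplus\mathbb{Z}/d_k

for some r\ge 0 and integers d_1\mid d_2\mid\dotsb\mid d_k with each d_i>1, and r and the d_i are uniquely determined by A.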

Continue reading



Subgroups of Free Groups are Free
December 28, 2010, 07:32
Filed under: Algebra, Math

Okay, first post for a while.  As I promised quite a while back, let’s prove together that subgroups of free groups are free.  It’s surprising that this is nontrivial to prove: just try to come up with some subgroups of F_2 and you’ll see what I mean.  In fact, using only basic algebraic topology and a bit of graph theory, we can come up with a really simple argument that replaces this one.  Perhaps that’s an argument in favor of algebraic topology.  But I think this angle is sort of interesting, and it should be a fresh experience for me, at least.

The proof is due to Jean-Pierre “Duh Bear” Serre in his book Trees.  A heads up if you track this down — Serre has a really weird way of defining graphs.  Fortunately, for this proof at least, a little bit of work translates things into the same language of graphs and digraphs that we saw when talking about Cayley graphs.  I review that below the fold.  It takes a while to set up the machinery, though the proof itself isn’t too long.  To make up for that, I’ve left out a couple of minor details, which you should be able to fill in.  If some step doesn’t make sense, work it out — or try to disprove it!

Continue reading



Symmetric Groups
December 18, 2010, 11:36
Filed under: Algebra, Math, Uncategorized

We’ve seen symmetric groups before.  The symmetric group on an arbitrary set, S_X or {\rm Sym}(X), is the group of bijections from the set to itself.  As usual, we’re only interested in the finite case S_n, which we call the symmetric group on n symbols.  These are pretty important finite groups, and so I hope you’ll accept my apology for writing a post just about their internal structure.  The language we use to talk about symmetric groups ends up popping up all the time.
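
To make the composition concrete, here is a minimal Python sketch (the encoding and the names are mine, not anything from the post): a permutation of \{0,\dotsc,n-1\} is stored as a tuple whose i-th entry is the image of i.

    from itertools import permutations

    def compose(s, t):
        """Apply t first, then s; permutations are tuples with s[i] the image of i."""
        return tuple(s[t[i]] for i in range(len(t)))

    # S_3 as the set of all 3! = 6 bijections of {0, 1, 2}
    S3 = list(permutations(range(3)))
    s, t = (1, 0, 2), (0, 2, 1)             # the transpositions (0 1) and (1 2)
    print(compose(s, t), compose(t, s))     # (1, 2, 0) (2, 0, 1): different, so S_3 is nonabelian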

Continue reading



Orbits, Stabilizers, and Conjugacy Classes
December 1, 2010, 23:45
Filed under: Algebra, Math

We’ve seen a couple of ways to cut a group into pieces. First, we can look at its subgroups, which I visualize as irregular blobs all containing the identity. Under inclusion, these subgroups form a lattice, a partially ordered set in which every two elements have a greatest lower bound (here their intersection) and a least upper bound (here the group generated by their union). The structure of this lattice reveals a lot about the structure of the group and the things attached to it, the fundamental theorem of Galois theory being one powerful example. Second, given one subgroup, we can look at its cosets, which I visualize as parallel slices, and the quotient groups they form.
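
As a tiny worked example of those lattice operations (my example, not one from the post): in \mathbb{Z} under addition, every subgroup is of the form m\mathbb{Z}, the greatest lower bound of m\mathbb{Z} and n\mathbb{Z} is m\mathbb{Z}\cap n\mathbb{Z}=\mathrm{lcm}(m,n)\mathbb{Z}, and the least upper bound is the subgroup generated by both, namely \gcd(m,n)\mathbb{Z}.  So this subgroup lattice is just the divisibility order on the nonnegative integers, turned upside down.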

But cosets are tied to a specific subgroup and aren’t groups themselves, and the lattice of subgroups is in a sense too much information. One of the common problems of math is to find invariants — simpler objects that encode a lot of the data in a given structure and are easier to find. The only real way to get simpler than a group is with numbers, and one sequence of numbers is the class equation, which describes the conjugacy classes of the group. I visualize these as radial slices, like the layers of an onion.
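
Here is a quick brute-force check of that picture in Python (a sketch using my own encoding of permutations, not code from the post): compute the conjugacy classes of S_3 and read off its class equation.

    from itertools import permutations

    def compose(s, t):
        return tuple(s[t[i]] for i in range(len(t)))

    def inverse(s):
        inv = [0] * len(s)
        for i, si in enumerate(s):
            inv[si] = i
        return tuple(inv)

    def conjugacy_classes(G):
        """Orbits of G acting on itself by conjugation: x goes to g x g^(-1)."""
        classes, seen = [], set()
        for x in G:
            if x not in seen:
                cls = {compose(compose(g, x), inverse(g)) for g in G}
                classes.append(cls)
                seen |= cls
        return classes

    S3 = list(permutations(range(3)))
    print(sorted(len(c) for c in conjugacy_classes(S3)))   # [1, 2, 3], i.e. the class equation 6 = 1 + 2 + 3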

Continue reading



More Products of Groups
November 27, 2010, 22:40
Filed under: Algebra, Math

Before looking at solvability and group classification, I want to mention a couple more ways of “building” groups.  We’ve already seen how to find subgroups, how to take the quotient by a normal subgroup, and how to find the direct product of a family of groups.  Dual to the direct product is the free product, which generalizes the idea of a free group.  The amalgamated free product is a free product in which we glue the two factors together along the images of a common subgroup.  Also, though the only really good example is the group E(n) of Euclidean isometries, the semidirect product is worth a more formal look.  Finally, though it’s mostly terminology, I define the direct sum, which is useful for studying abelian groups.
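
As a teaser for the semidirect product in particular, here is a small Python sketch (my own encoding, nothing official): an element of E(2) is a pair (A, b) with A a rotation or reflection matrix and b a translation vector, acting by x\mapsto Ax+b, and composing two of these forces the “twisted” multiplication (A_1,b_1)(A_2,b_2)=(A_1A_2,\,A_1b_2+b_1) that the semidirect product formalizes.

    def compose(f, g):
        """Compose plane isometries x -> A x + b, applying g first and then f.
        The result is (A1 A2, A1 b2 + b1): the semidirect product rule."""
        (A1, b1), (A2, b2) = f, g
        A = [[sum(A1[i][k] * A2[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
        b = [sum(A1[i][k] * b2[k] for k in range(2)) + b1[i] for i in range(2)]
        return (A, b)

    def apply(f, x):
        A, b = f
        return [sum(A[i][k] * x[k] for k in range(2)) + b[i] for i in range(2)]

    rot90 = ([[0.0, -1.0], [1.0, 0.0]], [0.0, 0.0])   # rotation by 90 degrees about the origin
    shift = ([[1.0, 0.0], [0.0, 1.0]], [2.0, 3.0])    # translation by (2, 3)

    print(apply(compose(rot90, shift), [1.0, 0.0]))   # [-3.0, 3.0]
    print(apply(rot90, apply(shift, [1.0, 0.0])))     # [-3.0, 3.0], the same point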

Continue reading



Banach-Tarski part 2
November 21, 2010, 23:30
Filed under: Algebra, Math

So I sort of left you hanging last time.  We talked about equidecomposability, showed that F_2 was paradoxical under its own action on itself, and embedded F_2 into SO(3).  From here, it just becomes a matter of putting all the steps together: first the sphere, then the ball minus its center, then the whole ball.

Continue reading



Banach-Tarski part 1
November 21, 2010, 06:33
Filed under: Algebra, Math

Okay, here’s the moment you’ve been waiting for: the proof of the Banach-Tarski Paradox.  Here’s what the paradox says:

Theorem (Banach-Tarski).  There are finitely many disjoint subsets of \mathbb{R}^3 whose union is the unit ball, such that we can apply an isometry to each of them and wind up with disjoint sets whose union is a pair of unit balls.

Or “we can cut a unit ball up into a finite number of pieces, rearrange them, and put them back together to make two balls.”

Continue reading



Products of Groups
November 19, 2010, 22:49
Filed under: Algebra, Math

Ugh, so, I’ve been really busy today and haven’t had the time to do a Banach-Tarski post.  Since I really do want to see MaBloWriMo to the end, I’m going to take a break from the main exposition and quickly introduce something useful.  There are a couple major ways of combining two groups into one.  The most important one, called the direct product, is analogous to the product of topological spaces.  I know this is sort of a wussy post — sorry.
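
If you want the one-line idea ahead of time: the underlying set is the Cartesian product and the operation is componentwise.  A toy check in Python (the helper and the names are mine):

    def direct_product(op_g, op_h):
        """Componentwise operation on pairs: (g1, h1) * (g2, h2) = (g1 * g2, h1 * h2)."""
        return lambda x, y: (op_g(x[0], y[0]), op_h(x[1], y[1]))

    # Z/2 x Z/3 with addition in each coordinate
    op = direct_product(lambda a, b: (a + b) % 2, lambda a, b: (a + b) % 3)

    # repeatedly multiplying by (1, 1) hits all six elements before returning to the identity,
    # so Z/2 x Z/3 is cyclic of order 6
    p, powers = (0, 0), []
    for _ in range(6):
        p = op(p, (1, 1))
        powers.append(p)
    print(powers)   # [(1, 1), (0, 2), (1, 0), (0, 1), (1, 2), (0, 0)]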

Continue reading



Isometries of Euclidean Space
November 18, 2010, 21:06
Filed under: Algebra, Math

Finite-dimensional vector spaces \mathbb{R}^n come packed with something extra: an inner product.  An inner product is a map that multiplies two vectors and gives you a scalar.  It’s usually written with a dot, or with angle brackets.  For real vector spaces, we define it to be a map V\times V\rightarrow\mathbb{R} with the following properties:

  • Symmetry: \langle x,y\rangle=\langle y,x\rangle
  • Bilinearity: \langle ax+bx^\prime,y\rangle=a\langle x,y\rangle+b\langle x^\prime,y\rangle, where a,b are scalars and x^\prime is another vector, and the same for the second coordinate
  • Positive-definiteness: \langle x,x\rangle\ge 0, and it is only equal to 0 when x=0.

(I’m going to stop using boldface for vectors, since it’s usually clear what’s a vector and what’s not.)  One of the uses of an inner product is to define the length of a vector: just set \|x\|=\sqrt{\langle x,x\rangle}.  This is only 0 if x is, and otherwise it’s always real and positive because the inner product is positive definite.  Another use is to define the angle between two nonzero vectors: set \cos\theta=\frac{\langle x,y\rangle}{\|x\|\|y\|}.  In particular, \theta is right iff \langle x,y\rangle=0.  In this case, we say x and y are orthogonal.

In Euclidean space, the inner product is the dot product: \langle (x_1,x_2,\dotsc,x_n),(y_1,y_2,\dotsc,y_n)\rangle=x_1y_1+x_2y_2+\dotsb+x_ny_n.  This is primarily what we’re concerned with today, so we’ll return to abstract inner products another day.
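
For a concrete companion to the formulas above, here is a small Python sketch (the names are mine): the dot product, the norm it induces, and the angle formula.

    import math

    def dot(x, y):
        """Euclidean inner product: x_1 y_1 + ... + x_n y_n."""
        return sum(a * b for a, b in zip(x, y))

    def norm(x):
        """Length induced by the inner product: sqrt(<x, x>)."""
        return math.sqrt(dot(x, x))

    def angle(x, y):
        """Angle between nonzero vectors, from cos(theta) = <x, y> / (||x|| ||y||)."""
        return math.acos(dot(x, y) / (norm(x) * norm(y)))

    x, y = (1.0, 1.0, 0.0), (0.0, 2.0, 0.0)
    print(dot(x, y))                   # 2.0
    print(norm(x), norm(y))            # 1.414... 2.0
    print(math.degrees(angle(x, y)))   # 45.0 (up to floating-point rounding)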

Continue reading