Filed under: Algebra, Math | Tags: abelian groups, algebra, arithmetic, field theory, group theory, Math, ring theory
In which I sort of breeze through a couple of really awesome and really important concepts. Last time, we classified abelian groups — now we’ll see what happens if we require additional structure on the groups. In particular, I’m going to construct $\mathbb{Z}$ and $\mathbb{Q}$ similarly to how the Peano axioms constructed $\mathbb{N}$.
Filed under: Algebra, Math | Tags: abelian groups, algebra, commutative, group theory, Math
Wow, it’s been a long time since I’ve written anything on this blog. I’m taking algebraic topology and an algebraic number theory course this semester, and I started reading through Atiyah and Macdonald’s Introduction to Commutative Algebra over the winter. So I thought I’d continue with a little algebra. The algebra we’ve done thus far has been highly noncommutative, for the most part — we investigated groups like free groups, symmetric groups, matrix groups, and dihedral groups, in which the order of multiplication mattered. As you might expect, with abelian groups, the theory becomes much simpler, and the subject called “commutative algebra” is just the study of abelian groups with extra structure — something like a scalar multiplication, as in the case of vector spaces, or some other operation. But first, we need to understand abelian groups.
When talking about abelian groups specifically, we usually write them additively: the group operation applied to $a$ and $b$ is $a + b$, and then we can build expressions like $na = a + a + \cdots + a$. The proof I give below is due to J. S. Milne, who in turn says it’s similar to Kronecker’s original proof. Of course, I’ve added more detail in places where I thought it was necessary, and taken it out where I thought it wasn’t. There are other, more common proofs, typically using matrices, but I find them unwieldy and inelegant.
Filed under: Algebra, Math | Tags: algebra, geometry, graph theory, group theory, Math, serre
Okay, first post for a while. As I promised quite a while back, let’s prove together that subgroups of free groups are free. It’s surprising that this is nontrivial to prove: just try to come up with some subgroups of $F_2$, the free group on two generators, and you’ll see what I mean. In fact, using only basic algebraic topology and a bit of graph theory, we can come up with a really simple argument that replaces this one. Perhaps that’s an argument in favor of algebraic topology. But I think this angle is sort of interesting, and it should be a fresh experience for me, at least.
The proof is due to Jean-Pierre “Duh Bear” Serre in his book Trees. A heads-up if you track this down: Serre has a really weird way of defining graphs. Fortunately, for this proof at least, a little bit of work translates things into the same language of graphs and digraphs that we saw when talking about Cayley graphs. I review that below the fold. It takes a while to set up the machinery, though the proof itself isn’t too long. In recompense, I’ve left out a couple of minor details, which you should be able to fill in. If some step doesn’t make sense, work it out — or try to disprove it!
Filed under: Algebra, Math, Uncategorized | Tags: algebra, combinatorial, group theory, Math, symmetry
We’ve seen symmetric groups before. The symmetric group on an arbitrary set $S$, written $\operatorname{Sym}(S)$, is the group of bijections from the set to itself. As usual, we’re only interested in the finite case $S_n = \operatorname{Sym}(\{1, \dots, n\})$, which we call the symmetric group on $n$ symbols. These are pretty important finite groups, and so I hope you’ll accept my apology for writing a post just about their internal structure. The language we use to talk about symmetric groups ends up popping up all the time.
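To make this concrete, here’s a quick Python sketch (my addition, not from the original post) that enumerates $S_3$ by encoding each bijection as a tuple:

```python
from itertools import permutations

# Elements of S_3: all bijections of {0, 1, 2}, encoded as tuples
# where p[i] is the image of i.
elements = list(permutations(range(3)))

def compose(p, q):
    """Composition of bijections: (p . q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

# |S_n| = n!, so S_3 has 6 elements.
print(len(elements))  # 6

# Closure: composing any two bijections gives another bijection.
assert all(compose(p, q) in elements for p in elements for q in elements)

# Noncommutativity shows up already in S_3:
p, q = (1, 0, 2), (0, 2, 1)
print(compose(p, q) == compose(q, p))  # False
```

The same encoding works for any $n$; only the tuple length changes.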
Filed under: Algebra, Math | Tags: algebra, combinatorial, group theory, Math
We’ve seen a couple of ways to cut a group into pieces. First, we can look at its subgroups, which I visualize as irregular blobs all containing the identity. Under inclusion, these subgroups form a lattice, a partially ordered set in which every two elements have a greatest lower bound (here their intersection) and a least upper bound (here the group generated by their union). The structure of this lattice reveals a lot about the structure of the group and the things attached to it, the fundamental theorem of Galois theory being one powerful example. Second, given one subgroup, we can look at its cosets, which I visualize as parallel slices, and the quotient groups they form.
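As a toy illustration of the lattice operations (my own sketch, not from the post), take the cyclic group $\mathbb{Z}_{12}$, whose subgroups correspond to divisors of 12:

```python
from math import gcd

# Subgroups of the cyclic group Z_12 under addition mod 12.
n = 12

def subgroup(d):
    """The cyclic subgroup of Z_n generated by d."""
    return frozenset((d * k) % n for k in range(n))

H, K = subgroup(4), subgroup(6)   # {0, 4, 8} and {0, 6}

# Greatest lower bound: the intersection is again a subgroup.
meet = H & K
print(sorted(meet))               # [0] -- the trivial subgroup

# Least upper bound: the subgroup generated by the union;
# for cyclic subgroups this is generated by the gcd of the generators.
join = subgroup(gcd(4, 6))
assert H <= join and K <= join
print(sorted(join))               # [0, 2, 4, 6, 8, 10]
```

Here the lattice of subgroups of $\mathbb{Z}_{12}$ is exactly the lattice of divisors of 12 under divisibility.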
But cosets are tied to a specific subgroup and aren’t groups themselves, and the lattice of subgroups is in a sense too much information. One of the common problems of math is to find invariants — simpler objects that encode a lot of the data in a given structure and are easier to find. The only real way to get simpler than a group is with numbers, and one sequence of numbers is the class equation, which describes the conjugacy classes of the group. I visualize these as radial slices, like the layers of an onion.
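To see the class equation in action, here’s a small Python sketch (my addition, using the tuple encoding of permutations) computing the conjugacy classes of $S_3$:

```python
from itertools import permutations

elements = list(permutations(range(3)))

def compose(p, q):
    """(p . q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def conjugacy_class(x):
    """All conjugates g x g^{-1} of x."""
    return frozenset(compose(compose(g, x), inverse(g)) for g in elements)

classes = {conjugacy_class(x) for x in elements}
class_equation = sorted(len(c) for c in classes)
print(class_equation)        # [1, 2, 3]
print(sum(class_equation))   # 6 = |S_3|
```

The classes are the identity, the two 3-cycles, and the three transpositions, so the class equation of $S_3$ reads $6 = 1 + 2 + 3$.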
Filed under: Algebra, Math | Tags: abelian groups, algebra, categorical, group theory, Math
Before looking at solvability and group classification, I want to mention a couple more ways of “building” groups. We’ve already seen how to find subgroups, and how to take the quotient by a normal subgroup, and how to find the direct product of a family of groups. Dual to the direct product is the free product, which generalizes the idea of a free group. The amalgamated free product is just a free product in which we glue the factors along the images of maps from a common group. Also, though the only really good example is the group of Euclidean isometries, the semidirect product is worth a more formal look. Finally, though it’s mostly terminology, I define the direct sum, which is useful for studying abelian groups.
Filed under: Algebra, Math | Tags: algebra, geometry, group theory, linear algebra, MaBloWriMo, Math
So I sort of left you hanging last time. We talked about equidecomposability, showed that $F_2$ was paradoxical under its own action on itself, and embedded $F_2$ into $SO(3)$. From here, it just becomes a matter of putting all the steps together: first the sphere, then the ball minus its center, then the whole ball.
Filed under: Algebra, Math | Tags: algebra, geometry, group theory, linear algebra, MaBloWriMo, Math, topology
Okay, here’s the moment you’ve been waiting for: the proof of the Banach-Tarski Paradox. Here’s what the paradox says:
Theorem (Banach-Tarski). There are a finite number of disjoint subsets of $\mathbb{R}^3$ whose union is the unit ball, and such that we can apply an isometry to each of them and wind up with disjoint sets whose union is a pair of unit balls.
Or “we can cut a unit ball up into a finite number of pieces, rearrange them, and put them back together to make two balls.”
Filed under: Algebra, Math | Tags: algebra, categorical, group theory, MaBloWriMo, Math
Ugh, so, I’ve been really busy today and haven’t had the time to do a Banach-Tarski post. Since I really do want to see MaBloWriMo to the end, I’m going to take a break from the main exposition and quickly introduce something useful. There are a couple major ways of combining two groups into one. The most important one, called the direct product, is analogous to the product of topological spaces. I know this is sort of a wussy post — sorry.
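A minimal sketch of the direct product (my addition, not from the post): the underlying set is the Cartesian product, and the operation is componentwise.

```python
from itertools import product

# Direct product Z_2 x Z_3: pairs with componentwise addition.
G = list(product(range(2), range(3)))

def op(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 3)

print(len(G))  # 6 elements: |Z_2 x Z_3| = |Z_2| * |Z_3|

# Since gcd(2, 3) = 1, the element (1, 1) generates the whole group,
# so Z_2 x Z_3 is in fact cyclic, isomorphic to Z_6.
x, powers = (0, 0), set()
for _ in range(6):
    powers.add(x)
    x = op(x, (1, 1))
print(len(powers) == 6)  # True
```

The analogy with products of topological spaces is exact: project onto either coordinate and you get a group homomorphism, just as projections of a product space are continuous.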
Filed under: Algebra, Math | Tags: algebra, geometry, group theory, MaBloWriMo, Math, topology
Finite-dimensional vector spaces come packed with something extra: an inner product. An inner product is a map that multiplies two vectors and gives you a scalar. It’s usually written with a dot, or with angle brackets. For real vector spaces, we define it to be a map $V \times V \to \mathbb{R}$, written $(u, v) \mapsto u \cdot v$, with the following properties:
- Bilinearity: $(au + bv) \cdot w = a(u \cdot w) + b(v \cdot w)$, where $a, b$ are scalars and $w$ is another vector, and the same for the second coordinate
- Positive-definiteness: $v \cdot v \geq 0$, and it is only equal to $0$ when $v = 0$.
(I’m going to stop using boldface for vectors, since it’s usually clear what’s a vector and what’s not.) One of the uses of an inner product is to define the length of a vector: just set $|v| = \sqrt{v \cdot v}$. This is $0$ only if $v$ is, and otherwise it’s always real and positive because the inner product is positive definite. Another use is to define the angle $\theta$ between two nonzero vectors $u$ and $v$: set $\cos \theta = \frac{u \cdot v}{|u| |v|}$. In particular, $\theta$ is right iff $u \cdot v = 0$. In this case, we say $u$ and $v$ are orthogonal.
In Euclidean space $\mathbb{R}^n$, the inner product is the dot product: $u \cdot v = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n$. This is primarily what we’re concerned with today, so we’ll return to abstract inner products another day.
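The definitions above translate directly into code. Here’s a short Python sketch (my addition) of the dot product, the length it induces, and the angle formula:

```python
import math

def dot(u, v):
    """Dot product: sum of componentwise products."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    """Length induced by the inner product: |v| = sqrt(v . v)."""
    return math.sqrt(dot(v, v))

def angle(u, v):
    """Angle between nonzero vectors, from cos(theta) = (u . v)/(|u||v|)."""
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

u, v = (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)
print(dot(u, v))                   # 0.0 -- so u and v are orthogonal
print(math.degrees(angle(u, v)))   # 90.0
print(norm((3.0, 4.0, 0.0)))       # 5.0
```

Note that positive-definiteness is what makes `norm` well defined: `dot(v, v)` is never negative, so the square root always exists.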