Last time we talked about the cosets of a subgroup. We showed that for a normal subgroup, the left and right cosets coincide, and thus you can multiply any two cosets to get a third coset, defining a group operation on the set of cosets: the quotient group. We proved the powerful First Isomorphism Theorem, which characterized normal subgroups as the kernels of homomorphisms and showed that the quotient of a homomorphism’s domain by its kernel is isomorphic to its image.
Today, we’re going to introduce a new “type” of group and use the First Isomorphism Theorem to develop a new way of representing groups.
BUT FIRST let’s do the random group theory factoid-of-the-day! We defined the kernel of a homomorphism last time as the set of elements that get sent to the identity. Here’s an easy way to check if a homomorphism is injective:
Proposition. A homomorphism $\varphi \colon G \to H$ is injective iff its kernel is trivial: $\ker \varphi = \{e_G\}$.
Proof. Let $\varphi \colon G \to H$ be an injective homomorphism. We know that $\varphi(e_G) = e_H$, so nothing else can hit the identity, by injectivity. So $\ker \varphi = \{e_G\}$. Conversely, if $\varphi$ is a homomorphism with trivial kernel, and $\varphi(a) = \varphi(b)$, then $\varphi(ab^{-1}) = e_H$, so $ab^{-1} = e_G$, and $a = b$. Thus, $\varphi$ is injective.
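The proposition is easy to check concretely for small finite groups. Here’s a minimal sketch (the helper names are mine, not anything standard) computing kernels of two homomorphisms $\mathbb{Z}/12 \to \mathbb{Z}/12$, one non-injective and one injective:

```python
# Check injectivity via the kernel, per the proposition above.
# phi(k) = 3k mod 12 and psi(k) = 5k mod 12 are both homomorphisms
# of the additive group Z/12 to itself; identity element is 0.

def kernel(hom, domain):
    """Elements of the domain sent to the identity (0, additively)."""
    return [g for g in domain if hom(g) == 0]

Z12 = range(12)
phi = lambda k: (3 * k) % 12
psi = lambda k: (5 * k) % 12

print(kernel(phi, Z12))   # [0, 4, 8] -- nontrivial kernel, so phi is not injective
print(kernel(psi, Z12))   # [0] -- trivial kernel, so psi is injective
```

Note that $\psi$ is injective exactly because $\gcd(5, 12) = 1$, which is the kernel condition in disguise.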
On to the main topic. Let’s start with some arbitrary set $S$. The free group on that set is the set of finite formal products of its elements and their inverses, with no interaction between them other than “adjacent inverses have to cancel”, like $abb^{-1}c = ac$. Such a product is called a (reduced) word. We define the product of two words to be the word you get when you write them next to each other, then cancel adjacent inverses: so $(ab)(b^{-1}c) = ac$. This product is a group operation, with identity given by the empty word, which I’ll just write $e$, and with $(s_1^{k_1} s_2^{k_2} \cdots s_n^{k_n})^{-1} = s_n^{-k_n} \cdots s_2^{-k_2} s_1^{-k_1}$, where the $s_i$ are symbols in the set and the $k_i$ are their powers.
We’ll write the free group on a set $S$ as $F(S)$. It’s clearly very infinite for nonempty $S$, but still quite well-behaved: given an unreduced word in the group, it’s easy to tell what group element that word represents: just scan it for adjacent inverses and delete them. Moreover, if $S$ and $T$ have the same cardinality, then $F(S) \cong F(T)$, so in general, we’ll write free groups on finite sets of cardinality $n$ as $F_n$.
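The scan-and-cancel procedure can be sketched in a few lines of Python. Words are represented here as lists of (symbol, exponent-sign) pairs, a representation I’m choosing for illustration; a stack makes one pass suffice:

```python
# Free reduction: cancel adjacent inverse pairs like ('a', 1), ('a', -1).

def reduce_word(word):
    """Reduce a formal word to its canonical reduced form."""
    stack = []
    for sym, exp in word:
        if stack and stack[-1] == (sym, -exp):
            stack.pop()              # adjacent inverses cancel
        else:
            stack.append((sym, exp))
    return stack

# a b b^-1 a^-1 a c  reduces to  a c
w = [("a", 1), ("b", 1), ("b", -1), ("a", -1), ("a", 1), ("c", 1)]
print(reduce_word(w))                # [('a', 1), ('c', 1)]

# The group product: concatenate, then reduce.
def multiply(u, v):
    return reduce_word(u + v)

# (ab)(b^-1 c) = ac
print(multiply([("a", 1), ("b", 1)], [("b", -1), ("c", 1)]))
```

A single left-to-right pass works because a cancellation can only expose a new cancellable pair at the top of the stack, never earlier in the word.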
We can also look at free groups through a universal property. Let $G$ be a group, and let $S$ be a set, with a map $f \colon S \to G$. Then there is a unique group homomorphism $\varphi \colon F(S) \to G$ that restricts to $f$ on $S$. Thus, the homomorphisms from $F(S)$ to $G$ biject with the functions from $S$ to $G$. Why should you care? Because when we study other algebraic structures besides groups, this very same property can characterize “free objects” in them, too. We call the set $S$ a basis for the free object constructed in this way.
Theorem. The definitions of “free group” given above are equivalent.
Proof. Given a map $f \colon S \to G$, just define $\varphi(s) = f(s)$ and $\varphi(s^{-1}) = f(s)^{-1}$ for $s \in S$, and extend recursively by $\varphi(uv) = \varphi(u)\varphi(v)$. Pretty much by definition, this is the only possible homomorphism on $F(S)$ that restricts to $f$. Look at what’s happening here: we’re actually defining a map on the monoid of unreduced words, but then in order to make this a group homomorphism, its kernel must include all words of the form $ss^{-1}$ and so on. Then we just declare it to be a group homomorphism, automatically adding more stuff to the kernel. For example, the map $a, b \mapsto 1$ gives a homomorphism of $F_2$ onto $(\mathbb{Z}, +)$, with the kernel including words like $ab^{-1}$ and $aba^{-1}b^{-1}$.
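Here’s a quick sketch of that construction, under the assumption that the generator map sends both $a$ and $b$ to $1$ in $(\mathbb{Z}, +)$: the induced homomorphism just sums the signed exponents of a word, and kernel elements are exactly the words summing to zero.

```python
# The homomorphism F({a, b}) -> (Z, +) induced by a, b -> 1.
# Words are lists of (symbol, exponent-sign) pairs, as before.

def phi(word):
    """Sum of signed exponents: the image of the word in Z."""
    return sum(exp for _sym, exp in word)

print(phi([("a", 1), ("b", -1)]))                        # 0: a b^-1 is in the kernel
print(phi([("a", 1), ("b", 1), ("a", -1), ("b", -1)]))   # 0: the commutator, too
print(phi([("a", 1), ("b", 1)]))                         # 2
```

Notice that $\varphi$ doesn’t care whether a word is reduced: $ss^{-1}$ contributes $1 - 1 = 0$ automatically, which is the “declare it a homomorphism” step in action.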
Conversely, let $F$ be a group containing $S$ that has the universal property for the set $S$, and let $F(S)$ be the actual free group on $S$. Then since $S \subseteq F(S)$, the inclusion map $S \hookrightarrow F(S)$ gives a group homomorphism $F \to F(S)$. Likewise, since $S \subseteq F$, we get another homomorphism $F(S) \to F$. I’ll leave it to you to show that they are inverses, and the groups are isomorphic.
Now, Cayley’s Theorem allowed us to write every group as a subset of a symmetric group: this is the regular or permutation representation of a group. Here’s a theorem that basically works the opposite way:
Theorem. Every group is isomorphic to a quotient of a free group.
Proof. Let $S$ be a set of generators for a group $G$. The inclusion of $S$ into $G$ gives us a homomorphism $\varphi \colon F(S) \to G$, which is surjective because $S$ generates $G$. By the First Isomorphism Theorem, $G \cong F(S)/\ker \varphi$.
Of course, every group has a set of generators: at worst, its entire underlying set works. We call a group finitely generated if a finite set of generators exists. In general, the above theorem is only used on finitely generated groups: for bigger groups, there tend to be more efficient ways to talk about things.
The kernel of $\varphi$ is a subgroup of $F(S)$. As it turns out, every subgroup of a free group is free (we’ll prove this tomorrow, when I can do the cool proof instead of the boring one). So $\ker \varphi$ is free on some subset of $F(S)$. We call the elements of such a subset the relations of $G$. These are the words in the generators that get cancelled completely in $G$. This is all we need to determine the structure of $G$: just like injective homomorphisms are characterized by their behavior at the identity, we can characterize all the other cancellations in $G$ by the things that get cancelled to the identity.
This presentation is called the generators and relations presentation of $G$. Generally it’s written as $\langle S \mid R \rangle$, or as $\langle s_1, \ldots, s_n \mid r_1, \ldots, r_m \rangle$, where the $s_i$ are generators and the $r_j$ are relations. If a group has finitely many generators and relations, it is finitely presented.
As an example, $\mathbb{Z} \cong \langle a \mid \ \rangle$, and $\mathbb{Z}/n\mathbb{Z} \cong \langle a \mid a^n \rangle$. The dihedral group $D_n \cong \langle r, s \mid r^n, s^2, (sr)^2 \rangle$, as you should prove if you haven’t already. Symmetric groups have their own theory that we haven’t really investigated: can you find generators and relations for them?
Sadly, generators and relations don’t actually tell us all that much. The word problem for a given group is to find a systematic way to tell whether two words in the group’s generators represent the same element. It is, in general, unsolvable from the group’s presentation: the proof is really out of our way for now, but Wikipedia proves it and gives an example of a group with only 10 generators and an unsolvable word problem. Fortunately, all the groups we deal with are nice enough that we can solve their word problems: in particular, every finite group has a solvable word problem. We actually solved the word problem for dihedral groups back when we introduced them, by saying that you can move all the rotations to one side. When we go into detail about symmetric groups, we will solve their word problem, too.
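That dihedral solution can be made completely mechanical. A sketch, assuming the presentation $D_n = \langle r, s \mid r^n, s^2, (sr)^2 \rangle$: the relation $sr = r^{-1}s$ lets us push every $s$ rightward past every $r$, so each element has a unique normal form $r^i s^j$ with $0 \le i < n$ and $j \in \{0, 1\}$, and two words are equal iff their normal forms agree.

```python
# Word problem for D_n: normalize a string over 'r', 's' to the pair (i, j)
# representing r^i s^j, using s r = r^-1 s.

def normal_form(word, n):
    """Normal form (i, j) of a word like 'srrs' in D_n."""
    i, j = 0, 0                      # current element is r^i s^j
    for c in word:
        if c == "r":
            # (r^i s^j) r = r^(i+1) s^j if j == 0, else r^(i-1) s^j
            i = (i + (1 if j == 0 else -1)) % n
        elif c == "s":
            j ^= 1                   # s^2 = e, so j just flips
    return (i, j)

n = 4
print(normal_form("srrs", n))                            # s r^2 s = r^-2 = r^2 in D_4
print(normal_form("rs", n) == normal_form("srrr", n))    # True: s r^3 = r s
```

Every finite group admits such a normal-form solution in principle (multiply out in the group itself); what’s special here is that the rewriting never needs to look at more than one letter at a time.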
I lied a tiny bit last time, in saying this was all we needed to do the Banach-Tarski Paradox. It’s all of the abstract stuff we need, but to go on we’ll need to talk a little more about group actions, the vector space structure of $\mathbb{R}^3$, and $SO(3)$, its group of rotations. Still, exciting times are ahead.