# Gracious Living

Relations
October 25, 2010, 13:38
Filed under: Math, Set Theory

A relation from $A$ to $B$ is just a subset of $A\times B$.  For example, $R=\{(a,b)\in\mathbb{R}\times\mathbb{R}:a^2=b^2\}$ is a relation from $\mathbb{R}$ to $\mathbb{R}$ that includes, say, $(0,0),(4,4),(4,-4),(-4,4),(3,-3)$, and so on.  We generally write $aRb$ instead of $(a,b)\in R$.  Except in very few circumstances, we actually only care about three kinds of relations.
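Since a relation is literally just a set of pairs, you can model one directly in code. Here's a quick Python sketch (restricting the example relation to small integers, since we can't enumerate all of $\mathbb{R}$):

```python
# The relation a^2 = b^2, restricted to small integers so we can enumerate it.
A = range(-4, 5)
R = {(a, b) for a in A for b in A if a**2 == b**2}

def related(a, b):
    """aRb just means the pair (a, b) is in the set R."""
    return (a, b) in R

print(related(4, -4))  # True
print(related(3, -3))  # True
print(related(3, 2))   # False
```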

The first kind is an order relation.  A partial order is a relation $\le$ from a set $A$ to itself with $x\le x$ for all $x$, with $x\le y$ and $y\le x$ only if $x=y$, and with $x\le z$ whenever $x\le y$ and $y\le z$ for some $y$.  We call these properties “reflexive,” “anti-symmetric,” and “transitive” respectively.  Because this includes the case $x=y$, it is called a “weak” partial order.  There’s also a “strict” notion of a partial order: a relation $<$ with $x<x$ for no $x$, with $x<y$ and $y<x$ for no pair $x,y$, and that is also transitive.  The definitions are pretty much equivalent.  Given a weak partial order $\le$, we get a strict partial order $<$ defined by $x<y$ when $x\le y$ and $x\ne y$.  Likewise, we can get a weak partial order from a strict partial order by including the case of equality.
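On a finite set, the three axioms can be checked mechanically. A sketch, using divisibility on $\{1,\dots,6\}$ as the example order:

```python
def is_partial_order(rel, elements):
    """Check reflexivity, antisymmetry, and transitivity of a finite relation."""
    reflexive = all((x, x) in rel for x in elements)
    antisymmetric = all(x == y for (x, y) in rel if (y, x) in rel)
    transitive = all((x, z) in rel
                     for (x, y) in rel for (y2, z) in rel if y == y2)
    return reflexive and antisymmetric and transitive

# Divisibility on {1, ..., 6} is a weak partial order.
elems = range(1, 7)
divides = {(a, b) for a in elems for b in elems if b % a == 0}
print(is_partial_order(divides, elems))  # True
```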

The reason that partial orders are “partial” is that it’s possible to have elements that aren’t comparable: that is, neither $x\le y$ nor $y\le x$.  If we have a partial order on $A$ where every pair of elements is comparable, we call it a total order.  Total orders arrange sets in lines.  Partial orders are more complex, but it’s good to think of them in terms of what’re called “Hasse diagrams.”  These are just graphs with nodes for elements of $A$ and a line going upwards from $x$ to $y$ whenever $y>x$ and there’s no element $z$ between $y$ and $x$ (so with $y>z>x$).  You can then infer the rest of the relation by using transitivity: $a\le b$ whenever there is some chain of lines going up from $a$ to $b$.  At right is an example for one of the most common partial orders, the subset relation, on the power set of $\{1,2,3\}$.  Another good example of a partial order is on $\mathbb{N}$: $a\le b$ if $a$ divides $b$.
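The edges of a Hasse diagram are exactly the “covering” pairs, and you can compute them by throwing away any pair with something strictly in between. A sketch for the divisibility order on the divisors of 12:

```python
# Hasse-diagram edges for divisibility: keep a pair (x, y) with x < y in the
# order only when y covers x, i.e. no z sits strictly between them.
elems = [1, 2, 3, 4, 6, 12]  # the divisors of 12
less = {(a, b) for a in elems for b in elems if b % a == 0 and a != b}
covers = {(x, y) for (x, y) in less
          if not any((x, z) in less and (z, y) in less for z in elems)}
print(sorted(covers))
# [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```

Note that the full relation `less` has pairs like $(1,12)$ that the diagram omits; transitivity recovers them from the chain $1$–$2$–$4$–$12$.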

The second big kind of relation is the equivalence relation.  Like partial orders, equivalence relations are reflexive and transitive.  But instead of anti-symmetry, we require them to be symmetric: that is, if $\equiv$ is an equivalence relation on $A$, we must have $a\equiv b$ whenever $b\equiv a$, for all $a,b\in A$.  Equality is the ur-equivalence relation, and the most restrictive one.  Another example is equivalence modulo $n$ on $\mathbb{Z}$: $a\equiv b$ if $b-a$ is divisible by $n$.  For example, for $n=4$, we have $-4\equiv 0\equiv 4\equiv 8$, $3\equiv 7\equiv -1$, $2\equiv 6$, and so on.
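Equivalence mod $n$ is one line of code, which makes the examples easy to verify:

```python
def equiv_mod(n):
    """Return the relation 'a is equivalent to b mod n': b - a divisible by n."""
    return lambda a, b: (b - a) % n == 0

eq4 = equiv_mod(4)
print(eq4(-4, 8))  # True: 8 - (-4) = 12 is divisible by 4
print(eq4(3, -1))  # True: -1 - 3 = -4
print(eq4(2, 3))   # False
```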

Call the set of elements that $a$ is equivalent to under $\equiv$ the equivalence class of $a$, written sometimes as $[a]$.  The general idea is that an equivalence relation splits up its set into equivalence classes.  In fact, this is our first important theorem.  We call two sets disjoint if their intersection is the empty set, and we call a subset of $\mathcal{P}(A)$ a partition if its elements are all disjoint from each other and their union is $A$ itself.  So every element of $A$ is in exactly one of the sets of the partition.  Then we have

THEOREM. Every equivalence relation on $A$ defines a partition whose elements are its equivalence classes, and each partition $P$ on $A$ defines an equivalence relation by $x\equiv y$ if there is an element of $P$ containing both of them.

If you haven’t proved many theorems, try proving this one.  It’s easy!  Both directions are very useful.
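One direction of the theorem is easy to watch happen on a finite set: sort each element into the class of the first earlier element it's equivalent to, and a partition falls out. A sketch (note the comment where transitivity quietly does the work):

```python
def equivalence_classes(elements, equiv):
    """Group a finite set into the equivalence classes of a relation."""
    classes = []
    for a in elements:
        for cls in classes:
            if equiv(a, cls[0]):  # by transitivity, testing one member suffices
                cls.append(a)
                break
        else:
            classes.append([a])
    return classes

# Equivalence mod 4 on {-4, ..., 8} splits it into four classes.
elems = list(range(-4, 9))
classes = equivalence_classes(elems, lambda a, b: (b - a) % 4 == 0)
print(classes)  # [[-4, 0, 4, 8], [-3, 1, 5], [-2, 2, 6], [-1, 3, 7]]
```

The classes are pairwise disjoint and their union is the whole set, so they do form a partition.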

The third kind of relation can occur between any two sets, and should be the most familiar.  A function or map from $A$ to $B$ is a relation that contains exactly one pair $(a,b)$ for each $a\in A$.  We use letters like $f$ and $g$ for functions, and write $f(a)=b$ or $f:a\mapsto b$ rather than $afb$.  The element $f(a)$ is called the image of $a$, and the definition of a function ensures that each $a$ has an image, and just one image, in $B$.  (Conversely, $a$ is a preimage of $f(a)$.)  So a function can really be thought of as a machine that takes in $a$ and prints out its image.

Functions have a lot of terminology.  The set $A$ is called the domain and $B$ the codomain, and we write $f:A\rightarrow B$ to show this relationship.  The set of elements of $B$ of the form $f(a)$ is called the range of $f$.  If the range is the codomain, so $f$ “fills up” $B$, we say that $f$ is surjective or onto.  If $f$ hits each element of its range just once, so $f(a)=f(b)$ only if $a=b$, then $f$ is injective or one-to-one.  For any two functions $f:A\rightarrow B,g:B\rightarrow C$, we can define their composition, a function $g\circ f:A\rightarrow C$ defined by $g\circ f(a)=g(f(a))$.  Note that the function $g\circ f$ is given by first applying $f$, then $g$.  Confusing, but hard to stop using once you’re used to it.
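For finite domains and codomains, all of these definitions are directly checkable. A sketch using two functions on $\{0,1,2\}$ (arithmetic mod 3):

```python
def is_injective(f, domain):
    images = [f(a) for a in domain]
    return len(images) == len(set(images))  # no element hit twice

def is_surjective(f, domain, codomain):
    return {f(a) for a in domain} == set(codomain)  # range fills the codomain

def compose(g, f):
    """g o f: first apply f, then g."""
    return lambda a: g(f(a))

dom = [0, 1, 2]
double = lambda x: (2 * x) % 3   # 0->0, 1->2, 2->1: a bijection of {0,1,2}
square = lambda x: (x * x) % 3   # 0->0, 1->1, 2->1: neither injective nor onto
print(is_injective(double, dom), is_surjective(double, dom, dom))  # True True
print(is_injective(square, dom), is_surjective(square, dom, dom))  # False False
print(compose(double, square)(2))  # double(square(2)) = double(1) = 2
```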

Okay, so why do “surjective” and “injective” have such similar names?  Because they obey an important duality property.  Let $f:A\rightarrow B,g,h:B\rightarrow C,j,k:Z\rightarrow A$.  If $f$ is injective, then $f\circ k=f\circ j$ implies $k=j$, so you can cancel $f$ on the left.  If $f$ is surjective, then $g\circ f=h\circ f$ implies $g=h$, so you can cancel $f$ on the right.  Prove these!  (The best way to really get this is to look at examples where you can’t cancel $f$.  For example, if $f$ isn’t surjective, then $g$ and $h$ could disagree outside its range and still have $g\circ f=h\circ f$.)
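Here's the parenthetical's counterexample made concrete, with functions on tiny finite sets written as Python dicts:

```python
# f: {0,1} -> {0,1,2} is not surjective: it misses 2.
f = {0: 0, 1: 1}
# g and h agree on f's range {0, 1} but disagree at the missed element 2.
g = {0: 10, 1: 11, 2: 12}
h = {0: 10, 1: 11, 2: 99}

g_after_f = {a: g[f[a]] for a in f}  # the composition g o f
h_after_f = {a: h[f[a]] for a in f}  # the composition h o f
print(g_after_f == h_after_f)  # True: g o f = h o f ...
print(g == h)                  # False: ... yet g != h, so f can't be cancelled
```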

If $f$ is both surjective and injective, then it can be cancelled on both sides and we call it bijective.  We can define an inverse function $f^{-1}:B\rightarrow A$ by $f^{-1}(f(a))=a$.  By surjectivity, every element of $B$ is $f(a)$ for some $a$, and by injectivity, this holds for just one $a$, so the inverse function is well defined.  It is also true that $f(f^{-1}(b))=b$ for every $b\in B$.  So this is why cancellation works: if $f\circ g= f\circ h$ for a bijective $f$, then we can just compose with $f^{-1}$ to get $g=f^{-1}\circ f\circ g=f^{-1}\circ f\circ h=h$.  This is very basic category theory!  Another way of looking at bijections is by observing that they pair off elements of their domain with elements of their codomain.
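The well-definedness argument is exactly what a table-building sketch runs into: injectivity means no entry gets overwritten, surjectivity means every $b$ gets an entry.

```python
def inverse(f, domain):
    """Invert a bijection on a finite domain by tabulating f(a) -> a."""
    inv = {}
    for a in domain:
        b = f(a)
        assert b not in inv, "f is not injective"  # two a's would share an image
        inv[b] = a
    return inv

dom = [0, 1, 2, 3]
f = lambda x: (x + 1) % 4          # a bijection from {0,1,2,3} to itself
f_inv = inverse(f, dom)
print([f_inv[f(a)] for a in dom])  # [0, 1, 2, 3]: f_inv o f is the identity
print([f(f_inv[b]) for b in dom])  # [0, 1, 2, 3]: f o f_inv is too
```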

Functions $f:\mathbb{R}\rightarrow\mathbb{R}$ are always the best examples because they have graphs.  So

• The function $f(x)=x^3$ is bijective (every number has a cube root!)
• The function $f(x)=e^x$ is injective but not surjective ($e^x$ is always positive, so it misses zero and the negative numbers!)
• The function $f(x)=x^3-3x$ is surjective but not injective.
• The function $f(x)=x^2$ is neither injective nor surjective.

Here’s the great thing about graphs.  Remember the Horizontal Line Test?  You can test injectivity by seeing if any horizontal line hits your graph more than once?  Well, there’s a Horizontal Line Test for surjectivity, too: just see if any horizontal line hits your graph less than once!  Which is obvious when you think about it, but it illustrates the duality between the concepts in a way they don’t teach at school.

One more concept is needed.  Given a set $A^\prime\subset A$, we can define $f(A^\prime)=\{x\in B:x=f(a)\text{ for some }a\in A^\prime\}$.  Likewise, given $B^\prime\subset B$, we can define $f^{-1}(B^\prime)=\{x\in A:f(x)\in B^\prime\}$.  The words image and preimage are also used to describe these operations.  Note that $f^{-1}:\mathcal{P}(B)\rightarrow\mathcal{P}(A)$ is defined even if $f$ is not bijective, since $\emptyset\in\mathcal{P}(A)$.  However, it is not the inverse of $f:\mathcal{P}(A)\rightarrow \mathcal{P}(B)$: just take a non-bijective function like $f(x)=x^2$ and look at $f(f^{-1}([-1,1]))=f([-1,1])=[0,1]$, which isn’t all of $[-1,1]$ because $f$ isn’t surjective.  (Non-injectivity breaks the other composite: $f^{-1}(f([0,1]))=f^{-1}([0,1])=[-1,1]$.)
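A sketch of both set-level operations, with a small integer interval standing in for $\mathbb{R}$, showing the same loss of information:

```python
def image(f, A_sub):
    """f(A'): everything f hits from A'."""
    return {f(a) for a in A_sub}

def preimage(f, domain, B_sub):
    """f^{-1}(B'): everything in the domain that f sends into B'."""
    return {a for a in domain if f(a) in B_sub}

dom = range(-3, 4)        # stand-in for R
f = lambda x: x * x
pre = preimage(f, dom, {-1, 0, 1})
print(sorted(pre))                 # [-1, 0, 1]
print(sorted(image(f, pre)))       # [0, 1]: smaller than {-1, 0, 1}
```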

Parting words: the set of functions from $A$ to $B$ is sometimes called $B^A$.  Can you see why?  We will talk more about cardinality later.  Tomorrow, I want to start topology.
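As a nudge toward the puzzle: a function $A\to B$ is a choice of one element of $B$ for each element of $A$, so you can enumerate them all as tuples.

```python
from itertools import product

A = ['a1', 'a2', 'a3']
B = [0, 1]
# A function A -> B picks one element of B per element of A, so the
# functions correspond to the tuples in B x B x B.
functions = list(product(B, repeat=len(A)))
print(len(functions), len(B) ** len(A))  # 8 8
```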