# What are rings and fields?

## 1. Introduction

In high school you probably have studied many arithmetic laws of real numbers and how to apply them in order to compute or solve equations: $ab = ba$, $(a + b) + c = a + (b+c)$, $(a+b)^2 = a^2 + 2ab + b^2$ and so on. In computational algorithms, it is often the case that variables are not real numbers but some other objects, but we still apply some operations to them: e. g. one can add and multiply matrices, one can add and multiply 32-bit integers (with overflow), one can do bitwise logical operations with integers etc. It would make our algorithms much easier to analyze (and therefore easier to optimize for performance etc.) if we could choose the operations on these objects so that they satisfy the same arithmetic laws that we have learned for real numbers.

For example, suppose you are using 8-bit unsigned integer variables. An 8-bit integer has 256 potential values: the set of all 8-bit unsigned integers is $S=\{0,1,\ldots, 255\}$. The results of addition, subtraction and multiplication on 8-bit integers diverge from standard arithmetic due to overflows: if the result of an operation would be negative or exceed 255, then it is “wrapped around” to make it fit into 8 bits. In mathematical terms: addition, subtraction and multiplication are done modulo 256. E. g.

- $1 + 255 = 0$
- $17 - 18 = 255$
- $17 \times 16 = 16$
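The wrap-around operations can be sketched in Python by reducing each result modulo 256 (the function names here are my own choice, not standard ones):

```python
MOD = 256  # number of values an 8-bit unsigned integer can take

def add8(a, b):
    return (a + b) % MOD

def sub8(a, b):
    return (a - b) % MOD  # Python's % always returns a non-negative result

def mul8(a, b):
    return (a * b) % MOD

print(add8(1, 255))   # 0
print(sub8(17, 18))   # 255
print(mul8(17, 16))   # 16  (17 * 16 = 272 = 256 + 16)
```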

In situations where overflows can occur, a natural question emerges. Can we still use the simplification formulae from high school? For example, if a compiler encounters an expression of 8-bit integers like $a \times b + a \times c$, can it optimize it into $a \times (b+c)$? Or, for another example, consider this integer swapping algorithm:

```
x = x + y
y = x - y
x = x - y
```

In case of no overflow, it is easy to see that this algorithm swaps the values of `x` and `y`. Indeed, if we let $x_0$ and $y_0$ denote the original values of the variables `x` and `y`, respectively, then on the second line, `y` is assigned the value $x - y = (x_0 + y_0) - y_0 = x_0$ and on the third line, `x` is assigned the value $x - y = (x_0 + y_0) - x_0 = y_0$. For this algorithm to work in case of overflow, it would suffice if the identities $(x_0 + y_0) - y_0 = x_0$ and $(x_0 + y_0) - x_0 = y_0$ held also in case of overflow.

It turns out that formulae like $a \times b + a \times c = a \times (b + c)$ and $(x_0 + y_0) - y_0 = x_0$ and $(x_0 + y_0) - x_0 = y_0$ indeed hold even in case of overflow. Therefore we have an 8-bit integer swapping algorithm which does not need extra memory: overflows are not an obstacle but work for our benefit. We shall soon see the reason why such formulae hold: the set of all 8-bit integers, equipped with addition and multiplication modulo $256$, is a *ring*, an algebraic concept we shall define precisely later on in this lesson. Basically, a ring is a set with operations that behave like the usual addition, subtraction and multiplication of numbers. Especially nicely behaved rings are called *fields*; that's what the following lessons will be about.
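As a sketch, here is the swap in Python with the 8-bit wrap-around modelled by reduction modulo 256; the example values are chosen so that an overflow actually occurs:

```python
MOD = 256  # 8-bit wrap-around

def swap8(x, y):
    # the three assignments from the algorithm above, modulo 256
    x = (x + y) % MOD
    y = (x - y) % MOD  # = x0, even if x + y overflowed
    x = (x - y) % MOD  # = y0
    return x, y

# 200 + 100 = 300 overflows (wraps to 44), yet the swap still works
print(swap8(200, 100))  # (100, 200)
```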

The addition, subtraction and multiplication of 8-bit integers are not the same operations as the usual addition, subtraction and multiplication of numbers, because they give different results in case of overflow, so denoting them by the same symbols $+$, $-$ and $\times$ as the usual operations on numbers is only a matter of convention. We may instead take some completely different operations on 8-bit integers and ask the same question: do they satisfy the same rules as the usual operations on real numbers? For example, consider the following operations:

- bitwise XOR operator $\oplus$,
- bitwise AND operator $\wedge$.

It turns out that if we take $\oplus$ to be the substitute for both addition and subtraction and $\wedge$ to be the substitute for multiplication, then the arithmetic laws considered in the examples above are satisfied: $(a \wedge b) \oplus (a \wedge c) = a \wedge (b \oplus c)$, $(x_0 \oplus y_0) \oplus y_0 = x_0$ and $(x_0 \oplus y_0) \oplus x_0 = y_0$. This is because the set of 8-bit integers with the operations $\oplus$ and $\wedge$ will turn out to be a ring as well. So, for example, replacing the $+$ and $-$ in the swap algorithm above by XOR gives us another integer swapping algorithm.
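The XOR variant can be sketched the same way in Python, where `^` is the bitwise XOR operator:

```python
def swap_xor(x, y):
    # replace + and - in the swap by bitwise XOR
    x = x ^ y
    y = x ^ y  # = x0
    x = x ^ y  # = y0
    return x, y

print(swap_xor(200, 100))  # (100, 200)
```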

## 2. Operations on a set

Before going to the definitions of ring and field — sets equipped with operations that satisfy familiar arithmetic laws — let us make more precise what kind of *operations* we are talking about.

Continuing with our example of 8-bit unsigned integers, here's a list of some arithmetic operations connected to 8-bit numbers:

- addition (modulo 256) $+$,
- bitwise XOR $\oplus$,
- multiplication (modulo 256) $\times$,
- integer division $\operatorname{div}$,
- floating-point division $/$,
- bitwise NOT $\neg$.

The first five are **binary** operations, i. e. they take two arguments each; bitwise NOT is a **unary** operation, i. e. it takes one argument.

Floating-point division differs from the others in that it produces a floating-point number, which is not an 8-bit integer. In formal terms, the set of 8-bit integers is not *closed* under floating-point division.

A set $S$ is said to be **closed** under a unary operation $\operatorname{op}$ if for any $a\in S$, the value $\operatorname{op} a$ is well-defined and belongs to $S$.

Let $S$ be a set and let $\operatorname{op}$ be a binary operation on pairs of elements of $S$. We say that $S$ is **closed** under the operation $\operatorname{op}$ if for all $a,b\in S$ the value $a \operatorname{op} b$ is well-defined and belongs to $S$.

Integer division $a \operatorname{div} b$ produces an $8$-bit integer but is defined only for pairs $a,b$ of $8$-bit unsigned integers with $b\neq 0$. Therefore, the set of $8$-bit unsigned integers is not closed under integer division either (however, the set of non-zero $8$-bit integers is). The set of $8$-bit unsigned integers is closed under the other four operations in the list above.

In the definitions of algebraic structures like ring, field and group, that we will encounter in this topic, the algebraic structure is required to be closed under the operations that define it. This guarantees that the result is always well-defined when the operation is applied multiple times like $(a \operatorname{op} b) \operatorname{op} c$ etc.

## 3. Rings

A **ring** is a set $S$ together with binary operations $+$ and $\cdot$ satisfying the properties below.

1. The set $S$ is closed under $+$ and $\cdot$, i. e. for any two elements $a, b \in S$, the sum $a+b$ and the product $a\cdot b$ are defined and belong to $S$.
2. Addition is **commutative**: $a + b = b + a$.
3. Addition and multiplication are **associative**:
   \begin{align*} (a + b) + c &= a + (b + c)\enspace,\\ (a \cdot b) \cdot c &= a \cdot (b \cdot c)\enspace. \end{align*}
4. Multiplication is **distributive** over addition:
   \begin{align*} (a+b)\cdot c &= a\cdot c + b\cdot c\enspace,\\ a\cdot (b + c) &= a\cdot b + a\cdot c\enspace. \end{align*}
5. There exists an element $0 \in S$, called the **zero element**, such that $a + 0 = a$ for all $a$.
6. There exists an element $1 \in S$, called the (multiplicative) **identity element**, such that $a \cdot 1 = 1 \cdot a = a$ for all $a$.
7. For every $a \in S$, there exists $b \in S$ satisfying $a + b = 0$; that element $b$ will be denoted $-a$ and called the **opposite element** of $a$.

A ring where multiplication is commutative, i. e. where the equality $a \cdot b = b \cdot a$ always holds, is called a **commutative ring**.

A ring is a triple of a set and two operations, usually denoted like $(S; +, \cdot)$. The symbols $+$ and $\cdot$ are common for denoting the two ring operations, but if one considers two different rings on the same set $S$ (i. e. two pairs of operations on $S$), then of course one must use different symbols. If it is clear which operations are meant then instead of $(S; +, \cdot)$ one can simply write $S$.

In a ring, subtraction is automatically defined by the formula $a - b := a + (-b)$.

The properties 1-7 in the definition are called the ring **axioms**. The ring axioms together with the requirement that multiplication be commutative form the axioms of a commutative ring.

**Examples.**

- It is obvious that the set of all real numbers $\mathbb{R}$ is a commutative ring with its usual addition and multiplication.
- The set $\mathbb Z_n = \{0,\ 1,\ \ldots,\ n-1\}$ of all possible remainders modulo $n$, with addition and multiplication modulo $n$, is a commutative ring. Equivalently, we can say that $\mathbb Z_n$ is the set of *residue classes* modulo $n$: $\mathbb Z_n = \{\bar 0,\ \bar 1,\ \ldots,\ \overline{n-1}\}$ where $\bar a$ denotes the set of all integers that are equivalent to $a$ modulo $n$ (e. g., if $n = 5$ then $\bar 2 = \{\ldots,\ -8,\ -3,\ 2,\ 7,\ \ldots\}$), with addition and multiplication defined by $\bar a + \bar b = \overline{a + b}$ and $\bar a \cdot \bar b = \overline{a \cdot b}$ for any $a,\ b \in \mathbb Z$. The ring $\mathbb Z_n$ is called the **residue ring** or the **residue class ring** modulo $n$.
- The set of 8-bit integers equipped with addition, subtraction and multiplication following the usual “wrap-around” overflow rule is a commutative ring, because it is nothing else but the ring $\mathbb Z_{256}$. Indeed, addition and multiplication modulo 256 are exactly the addition and multiplication in $\mathbb Z_{256}$, and subtraction of 8-bit integers $a-b$ is likewise equivalent to $a + (-b)$ where $-b$ is the opposite element in $\mathbb Z_{256}$. (How to compute the opposite element in this ring? For example, what is $-9$, the opposite of the unsigned 8-bit integer $9$?)
- $(\mathbb Z_{256}; \oplus, \wedge)$, i. e. the set of 8-bit integers, where bitwise XOR is taken to be the ring's addition operation and bitwise AND is taken to be the multiplication, is a commutative ring as well.
- The set of all natural numbers $\mathbb{N}$ (here we also regard $0$ as a natural number) with its usual addition and multiplication is *not* a commutative ring: it satisfies all the axioms of a commutative ring except the last one. Indeed, the zero element of $\mathbb{N}$ can only be the natural number $0$ (no other natural number $n$ satisfies $a+n = a$ for all natural numbers $a$), but then e. g. the number $3$ has no opposite element, since there exists no natural number $b$ such that $3+b=0$.
- The set $\mathrm{Mat}_n(\mathbb R)$ of all $n \times n$ square matrices of real numbers is a ring, but it is not a commutative ring if $n > 1$, because the commutativity of multiplication $ab=ba$ does not hold for all matrices.

For each of the following algebraic structures, determine whether it is a commutative ring.

- $\mathbb Z$, the set of all integers, with usual addition and multiplication.
- The set of all even integers, with usual addition and multiplication.
- The algebraic structure $(L; +, \cdot)$ where $L = \{a,\ b\}$ is a set of two letters and the operations are defined as follows: $\begin{aligned} a + a &= a,&\quad a + b &= b,&\quad b + a &= b,&\quad b + b &= a,\\ a \cdot a &= a,&\quad a \cdot b &= a,&\quad b \cdot a &= a,&\quad b \cdot b &= b. \end{aligned}$

Looking at the fifth ring axiom, which defines the zero element of a ring, one may ask whether this definition always specifies the zero element uniquely: perhaps a ring could contain several elements satisfying the definition of the zero element (i. e. satisfying $a + x = a$ for all $a$)? Then it would be unclear which of them is meant when we write $0$. The same question can be asked about the identity element $1$. However, a simple proof shows that this is never an issue:

**Proposition (Uniqueness of zero and identity).** The zero and identity element of a ring are uniquely determined.

Similarly, for each element, its opposite element is uniquely determined.

**Proposition (Uniqueness of opposite element).** Every element of a ring has exactly one opposite element.

By definition, $+$ and $\cdot$ are binary operations only. However, it can be proven that associativity implies that in a sum of many terms we may place the parentheses any way we like: e. g. $a + (b + (c + d)) = (a + b) + (c + d) = ((a + b) + c) + d$. Therefore we do not need to write parentheses at all in expressions containing only $+$ or only $\cdot$: e. g. we write $a + b + c + d$ instead of $((a + b) + c) + d$. Analogously, we write $a \cdot b \cdot c$ instead of $(a \cdot b) \cdot c$ etc.

The usual notations of multiplication by natural number and powers are also often employed, e. g. $3a := a + a + a$ and $a^3 := a \cdot a \cdot a$. One also denotes multiples of the identity element with integers: e. g. we can define the element $4$ in a ring by $4 := 1 + 1 + 1 + 1$ where the $1$'s on the right denote the identity element of the ring. So we can say, for example, that in the ring $\mathbb Z_2$, one has $0 = 4$.

The axioms of commutative ring are sufficient to make many arithmetic formulae hold in any commutative ring, for example the following simple result.

**Proposition (multiplying by zero).** In any ring, the equality $0 \cdot x = 0$ holds for all elements $x$.
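One way to prove this from the axioms: since $0 + 0 = 0$ by the defining property of the zero element, distributivity gives

\[\begin{align*} 0 \cdot x = (0 + 0) \cdot x = 0 \cdot x + 0 \cdot x\enspace. \end{align*}\]

Adding the opposite element $-(0 \cdot x)$ to both sides and using the associativity of addition yields $0 = 0 \cdot x$.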

Using the ring axioms, prove that the identities $(x+y) - y = x$ and $(x+y) - x = y$ hold in any ring. This proves that the integer swapping algorithms given in the introduction section work.

Using the axioms, show that the identity $(x+y)^2 = x^2 + 2xy + y^2$ holds in any commutative ring.

Does it hold in any ring? (Don't simply guess, but justify your answer too — either prove it holds or find a counterexample.)

## 4. Fields

In a ring one can perform three of the four basic operations defined on numbers: addition, subtraction and multiplication, but in general not division. In the set of real numbers, one can divide any number by any non-zero number, but in some rings this is not possible. For example, in the ring $\mathbb Z$ there exists no element $x \in \mathbb Z$ such that $4x = 3$, so in the ring $\mathbb Z$ one cannot divide $3$ by $4$. In fact, one cannot even divide $1$ by $4$ in $\mathbb Z$, i. e. the number $4$ has no inverse in $\mathbb Z$. We will shortly define a subclass of rings that are more similar to $\mathbb R$ in this respect: the *fields*.

Let $a$ be an element of a ring $S$. If there exists an element $b \in S$ which satisfies $ab = ba = 1$ then the element $b$ will be denoted $a^{-1}$ and called the **multiplicative inverse**, or simply the **inverse** of $a$. Such an element $a$ is called **invertible**.

**Proposition (Uniqueness of inverse element).** Every element of a ring has at most one multiplicative inverse.

A commutative ring where $0 \neq 1$ and where every element except the zero element has an inverse is called a **field**.

In a field, you can divide any element by any non-zero element, just as in $\mathbb R$: division is defined automatically by the formula $a/b := a \cdot b^{-1}$. The requirement that the ring be commutative ensures that division on the left, $b^{-1} \cdot a$, and division on the right, $a \cdot b^{-1}$, give the same value (this need not be true e. g. when $a$ and $b$ are matrices). The requirement $0 \neq 1$ excludes the trivial ring consisting of only one element that is both its zero and its identity.

Prove that if $0 = 1$ in a ring then the ring contains only one element.

So, for example, the ring $\mathbb R$ of all real numbers and the ring $\mathbb Q$ of all rational numbers are fields but the ring $\mathbb Z$ of all integers is not.

Is the algebraic structure $L$ considered in the previous exercise a field?

## 5. Why finite fields?

Coming back to the statement at the beginning of this lesson that we would like the variables in our algorithms to belong to a ring (or even better, a field): that might be complicated in real programs in case of infinite rings. E. g. storing all the digits of a real number with infinitely many digits would require infinite memory, and that is why the finite algebraic structure of floating-point numbers is commonly used instead. However, the set of floating-point numbers has the drawback of not being a ring: e. g. $(a + b) + c \neq a + (b+c)$ if $b$ and $c$ are near the floating-point precision and $a$ is a moderately large number; this makes it difficult for a compiler to optimize floating-point expressions and so on.

But if a set $S$ has a finite number of elements, then in computer science terms, an algebraic structure $(S;\ +,\ \cdot)$ is essentially a data structure with two operations. In an object-oriented language, we could implement a ring as a class with methods for addition and multiplication of instances of the class. For example, one could write such a class for a ring like $\mathbb Z_n$ (although this might be overkill if $n$ is small enough for the numbers to fit in a primitive type); if you are adding, subtracting or multiplying $32$-bit integers in your program, you are actually performing operations in the ring $\mathbb Z_{2^{32}}$ (even in case of overflow, the operations behave as in $\mathbb Z_{2^{32}}$). So in practice, algebraic structures containing a finite number of elements are especially interesting, as they can be readily implemented on computers.
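As a sketch of such a class-based implementation (the class name `Zn` and its interface are my own choice, not a standard API), a minimal residue ring in Python might look like this:

```python
class Zn:
    """An element of the residue class ring Z_n (minimal sketch)."""

    def __init__(self, value, n):
        self.n = n
        self.value = value % n  # always store the canonical representative 0..n-1

    def __add__(self, other):
        return Zn(self.value + other.value, self.n)

    def __sub__(self, other):
        return Zn(self.value - other.value, self.n)

    def __mul__(self, other):
        return Zn(self.value * other.value, self.n)

    def __neg__(self):
        # the opposite element: a + (-a) = 0
        return Zn(-self.value, self.n)

    def __eq__(self, other):
        return self.n == other.n and self.value == other.value

    def __repr__(self):
        return f"{self.value} (mod {self.n})"

a = Zn(9, 256)
print(-a)                # 247 (mod 256): the opposite of 9 in Z_256
print(a + Zn(250, 256))  # 3 (mod 256)
```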

A field containing a finite number of elements is called a **finite field**. The fields most familiar to us — $\mathbb R$ and $\mathbb Q$ — are unfortunately infinite. Shortly we shall meet an important class of examples of finite fields — the residue class fields; later on in this chapter we'll describe all finite fields.

## 6. Residue class fields

The residue class rings $\mathbb{Z}_n$ are commutative rings of finite size. So if we are looking for finite fields, a promising idea is to start checking whether the rings $\mathbb{Z}_n$ satisfy the definition of field as well.

The residue ring $\mathbb{Z}_2=\left\{0,1\right\}$ is a finite field, as the inverse of $1$ is $1$. The residue ring $\mathbb{Z}_3=\left\{0,1,2\right\}$ is also a finite field, as $2^{-1} = 2$: \[\begin{align*} 2\cdot 2 = 4 \equiv 1\ (\operatorname{mod} 3)\enspace. \end{align*}\] With the ring $\mathbb{Z}_4$ we encounter some problems. Namely, the element $2$ does not have an inverse: $2\cdot 1 = 2$, $2\cdot 2 = 0$ and $2\cdot 3 = 2$.
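This case-by-case check can be continued by brute force in Python (the function name is my own choice): test whether every non-zero element of $\mathbb Z_n$ has a multiplicative inverse.

```python
def is_field_Zn(n):
    """Check whether Z_n is a field by verifying that every
    non-zero element has a multiplicative inverse (brute force)."""
    if n < 2:  # a field requires 0 != 1
        return False
    return all(any(a * b % n == 1 for b in range(n)) for a in range(1, n))

print([n for n in range(2, 14) if is_field_Zn(n)])  # [2, 3, 5, 7, 11, 13]
```

The output for small $n$ suggests the pattern explained next: exactly the prime moduli give fields.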

For which numbers $n$ is $\mathbb{Z}_n$ a field? By the field axioms, every non-zero element must have an inverse. However, if $n$ is a composite number, then none of its non-trivial factors $p$ (with $1 < p < n$) can have an inverse. Indeed, suppose $p x \equiv 1\ (\operatorname{mod} n)$ for some $x$; since $p$ divides $n$, reducing modulo $n$ and then modulo $p$ is the same as reducing modulo $p$, so \[\begin{align*} 0 = p x \operatorname{mod} p = (p x \operatorname{mod} n) \operatorname{mod} p = 1 \operatorname{mod} p = 1 \end{align*}\] and we have derived a contradiction. In fact, the following theorem holds.

**Theorem.** The residue class ring $\mathbb Z_n$ is a field if and only if $n$ is a prime number.

Find a non-zero non-invertible element in $\mathbb Z_{15}$, thereby proving that the ring $\mathbb Z_{15}$ is not a field.

Write an algorithm for computing the inverse element of an element of $\mathbb Z_p$. More precisely, write a method in IPython that takes two integers $a$ and $p$ as arguments and returns an integer — the inverse of $a$ in $\mathbb Z_p$. You may implement it simply by looking through all the elements of $\mathbb Z_p$.
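One possible sketch of the exhaustive-search approach (try the exercise yourself first):

```python
def inverse_naive(a, p):
    """Inverse of a in Z_p by looking through all elements of Z_p."""
    for b in range(p):
        if a * b % p == 1:
            return b
    raise ValueError(f"{a} is not invertible modulo {p}")

print(inverse_naive(3, 7))  # 5, since 3 * 5 = 15 = 2 * 7 + 1
```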

Do the same as in the previous exercise, but instead of looking through all the elements of $\mathbb Z_p$, use the idea of the proof of the Theorem — find elements $u$ and $v$ such that $1 = ua+ vp$ using the extended Euclid's algorithm (recall how Bézout's identity $1 = ua+ vp$ was proven in the Divisibility and Modular Arithmetics topic). Compare the performance with the naive implementation that looks through all the elements of $\mathbb Z_p$.
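A sketch along these lines (helper function names are my own choice): the extended Euclid's algorithm returns $u$ and $v$ with $\gcd(a, p) = ua + vp$, and if $\gcd(a, p) = 1$ then $ua \equiv 1\ (\operatorname{mod} p)$, so $u \operatorname{mod} p$ is the inverse.

```python
def extended_gcd(a, b):
    """Return (g, u, v) such that g = gcd(a, b) = u*a + v*b."""
    if b == 0:
        return a, 1, 0
    g, u, v = extended_gcd(b, a % b)
    return g, v, u - (a // b) * v

def inverse_euclid(a, p):
    """Inverse of a in Z_p via Bezout's identity 1 = u*a + v*p."""
    g, u, v = extended_gcd(a % p, p)
    if g != 1:
        raise ValueError(f"{a} is not invertible modulo {p}")
    return u % p

print(inverse_euclid(3, 7))  # 5
```

Unlike the naive search, this runs in time logarithmic in $p$ rather than linear.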

Recall that according to Fermat's Little Theorem $a^p\equiv a\ (\operatorname{mod} p)$ for any prime number $p$ and any integer $a$. Based on this idea, create a third implementation of the inversion algorithm (in addition to the implementations in the previous two exercises). How many multiplications are needed to invert an element with this algorithm?
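Since $a^p \equiv a\ (\operatorname{mod} p)$, for $a$ not divisible by $p$ we get $a \cdot a^{p-2} \equiv 1\ (\operatorname{mod} p)$, i. e. $a^{-1} = a^{p-2} \operatorname{mod} p$. A sketch using Python's built-in three-argument `pow`, which performs fast modular exponentiation by repeated squaring (so roughly $2\log_2 p$ multiplications suffice):

```python
def inverse_fermat(a, p):
    """Inverse of a in Z_p for prime p: a^(p-2) mod p, by Fermat's little theorem."""
    if a % p == 0:
        raise ValueError("0 is not invertible")
    # pow(a, e, p) computes a**e % p by repeated squaring
    return pow(a, p - 2, p)

print(inverse_fermat(3, 7))  # 5
```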