Shannon's groundbreaking work in which Information Theory was
born is generally considered one of the greatest scientific achievements
of the 20th century. This theory comes hand in hand with the Theory
of Communication and the problems one faces in attempting to communicate
reliably over noisy communication channels. The study of such problems
has evolved into
the field of Error-Correcting Codes. In the most basic scenario one
transmits n-bit-long messages. However, in order to deal
with noise that may arise during transmission, some efficiency must be
given up. From among all 2^{n} possible
n-bit strings we consider only a subset C ⊆ {0,1}^{n} ("a
code book") and restrict all our messages to n-bit strings from C.
In this theory a subset C ⊆ {0,1}^{n} is called a code
(or a binary code when we want to stress that our alphabet is the set
{0,1}). Consider a message y ∈ {0,1}^{n} that reaches the receiver.
The simplest situation is when y ∈ C, in which case we assume that the
message y is indeed what the transmitter had sent. If, however,
y ∉ C, we assume that y is a noisy version of some x ∈ C.
We pick that x ∈ C that is the most likely original message, namely,
the one that differs from y in the smallest possible number of bits,
i.e., that x ∈ C which minimizes d_{H}(x,y).
Here d_{H}(u,v) is
the Hamming distance between the two strings u, v ∈ {0,1}^{n},
namely, the number of coordinates in which they differ
d_{H}(u,v) = |{i : u_{i} ≠ v_{i}}|.
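As a concrete illustration, the Hamming distance is easily computed; here is a minimal Python sketch (the function name is ours):

```python
def hamming_distance(u, v):
    """Number of coordinates in which the equal-length strings u and v differ."""
    assert len(u) == len(v)
    return sum(1 for a, b in zip(u, v) if a != b)

# "0110" and "1100" differ in their first and third coordinates:
# d_H("0110", "1100") = 2
```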
It follows that if every two distinct members
u, v ∈ C are at distance d_{H}(u,v) ≥ r, then
our conclusion is correct whenever no more than ⌊(r-1)/2⌋
errors occur, for then y remains strictly closer to the original x
than to any other codeword. In this case we say that the code C ⊆ {0,1}^{n}
is ⌊(r-1)/2⌋-error-correcting.
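For instance, the length-5 repetition code {00000, 11111} has minimum distance 5 and hence corrects up to 2 errors under nearest-codeword decoding. A minimal Python sketch (the names are ours, and the decoder is brute force):

```python
def decode(y, code):
    """Return a codeword x in the code that minimizes d_H(x, y)."""
    return min(code, key=lambda x: sum(a != b for a, b in zip(x, y)))

C = {"00000", "11111"}   # the length-5 repetition code, with distance 5

# Two bit flips are corrected:
decode("01001", C)  # → "00000"
```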
Thus we understand that a major issue in the theory is to find codes
which meet two inherently conflicting goals:

The cardinality |C| is large, so that we make good use of the
communication channel. This objective is usually quantified in
terms of C's rate, namely R(C) = (1/n) log_{2} |C|.

All pairwise distances d_{H}(x,y) for x ≠ y ∈ C are large, so
that the code can correct many errors. The main parameter is
C's distance, which is d(C) = min d_{H}(x,y) over all
x ≠ y ∈ C. Often we refer to a normalized version
δ(C) = d(C)/n.
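For a small code both parameters can be computed by brute force; a Python sketch (the example code C is ours, chosen only for illustration):

```python
from itertools import combinations
from math import log2

def rate(code, n):
    """R(C) = (1/n) * log2 |C|."""
    return log2(len(code)) / n

def distance(code):
    """d(C) = minimum Hamming distance over all pairs of distinct codewords."""
    return min(sum(a != b for a, b in zip(x, y))
               for x, y in combinations(code, 2))

C = {"0000", "0111", "1011", "1101"}  # a small illustrative code, n = 4
rate(C, 4)      # → 0.5, since |C| = 4 = 2^2 and n = 4
distance(C)     # → 2, so δ(C) = 2/4 = 0.5
```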
Understanding this tradeoff is a mystery that has baffled
mathematicians for many years. This fundamental
problem is one of the main focal points of this school.
A main specific challenge is to understand how large R(C)
can be if
C ⊂ {0,1}^{n} satisfies δ(C) ≥ x
for some 0 ≤ x ≤ 1 and n is large.
This maximum is usually denoted by R(x).