The method of types is a set of counting and probability tools for sequences over a finite alphabet. It is foundational in information theory and large deviations: it turns questions about probabilities of empirical frequencies into entropy and divergence calculations.
It relies on the entropy–counting relationship developed in entropy-multinomial-coefficients.
Types and type classes
Let $A$ be a finite alphabet with $|A| = m$, and let $x^n = (x_1, \dots, x_n) \in A^n$.
The type (empirical distribution) of $x^n$ is the probability mass function $P_{x^n}$ on $A$ defined by

$$
P_{x^n}(a) = \frac{1}{n}\,\#\{i : x_i = a\}.
$$

For a type $P$ with denominator $n$ (i.e., all $nP(a)$ are integers), the type class is

$$
T(P) = \{\, x^n \in A^n : P_{x^n} = P \,\}.
$$
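These definitions are easy to exercise by brute force for small $n$. Below is a minimal Python sketch (the helpers `type_of` and `type_class` are hypothetical names, not standard library functions) that computes a type as exact fractions and enumerates a type class by exhaustive search:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

def type_of(xn, alphabet):
    """Empirical distribution P_{x^n}: exact fraction n_a / n per symbol."""
    n = len(xn)
    counts = Counter(xn)
    return {a: Fraction(counts[a], n) for a in alphabet}

def type_class(P, n, alphabet):
    """Brute-force T(P): all length-n sequences with type P (O(m^n), tiny n only)."""
    return [xn for xn in product(alphabet, repeat=n)
            if type_of(xn, alphabet) == P]

alphabet = ("a", "b")
xn = ("a", "b", "a", "a")
P = type_of(xn, alphabet)                 # {'a': 3/4, 'b': 1/4}
print(len(type_class(P, 4, alphabet)))    # 4 = 4!/(3! 1!) sequences share this type
```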
Counting: size of a type class
Write $n_a = nP(a)$ for each $a \in A$. Then

$$
|T(P)| = \frac{n!}{\prod_{a \in A} n_a!},
$$

a multinomial coefficient. Let

$$
H(P) = -\sum_{a \in A} P(a) \log P(a)
$$

(natural-log entropy). Then the type class size satisfies the standard bounds

$$
(n+1)^{-m} e^{nH(P)} \le |T(P)| \le e^{nH(P)}.
$$

In particular, $\log |T(P)| = nH(P) + O(\log n)$.
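A quick numerical sanity check of the count and the entropy bounds, as a sketch (the counts $(6, 3, 1)$ are an arbitrary example type with $n = 10$, $m = 3$):

```python
import math

def type_class_size(counts):
    """|T(P)| = n! / prod_a n_a!  (sequential integer division stays exact)."""
    n = sum(counts)
    size = math.factorial(n)
    for na in counts:
        size //= math.factorial(na)
    return size

def entropy(counts):
    """H(P) in nats for the type P(a) = n_a / n."""
    n = sum(counts)
    return -sum(na / n * math.log(na / n) for na in counts if na > 0)

counts = [6, 3, 1]
n, m = sum(counts), len(counts)
size, H = type_class_size(counts), entropy(counts)
print(size)                                                          # 840
print((n + 1) ** (-m) * math.exp(n * H) <= size <= math.exp(n * H))  # True
print(math.exp(n * H))            # ≈ 7938: e^{nH} overshoots only polynomially
```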
Counting: number of possible types
The number of distinct types on $A$ with denominator $n$ is at most

$$
(n+1)^m,
$$

because each $P(a)$ must lie in $\{0, 1/n, 2/n, \dots, 1\}$.
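For comparison, the exact count is the number of ways to split $n$ into $m$ ordered nonnegative parts, $\binom{n+m-1}{m-1}$ by stars and bars — still polynomial in $n$. A short check (illustrative values only):

```python
from math import comb

def num_types(n, m):
    """Exact count of types with denominator n: solutions of n_1 + ... + n_m = n."""
    return comb(n + m - 1, m - 1)

n, m = 10, 3
print(num_types(n, m))    # 66 distinct types
print((n + 1) ** m)       # 1331: the (n+1)^m bound is loose but polynomial
```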
Probability of a type class under an i.i.d. law
Let $Q$ be a distribution on $A$, and let $X^n \sim Q^{\otimes n}$ be i.i.d. draws.
For any fixed type $P$ with denominator $n$, every sequence $x^n \in T(P)$ has the same probability:

$$
Q^{\otimes n}(x^n) = \prod_{a \in A} Q(a)^{nP(a)}.
$$

Therefore,

$$
Q^{\otimes n}(T(P)) = |T(P)| \prod_{a \in A} Q(a)^{nP(a)}.
$$

Introduce the (natural-log) Kullback–Leibler divergence

$$
D(P \| Q) = \sum_{a \in A} P(a) \log \frac{P(a)}{Q(a)}.
$$

Since $\prod_{a \in A} Q(a)^{nP(a)} = e^{n \sum_a P(a) \log Q(a)} = e^{-n(H(P) + D(P \| Q))}$, using $|T(P)| \approx e^{nH(P)}$ gives the characteristic exponential rate

$$
Q^{\otimes n}(T(P)) \approx e^{-nD(P \| Q)}
$$

(up to polynomial factors). A standard bound pair is

$$
(n+1)^{-m} e^{-nD(P \| Q)} \le Q^{\otimes n}(T(P)) \le e^{-nD(P \| Q)}.
$$
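Both sides of this bound pair are computable exactly for small examples. A minimal sketch (the type counts and source $Q$ below are arbitrary illustrative choices):

```python
from math import factorial, log, exp

def prob_type_class(counts, Q):
    """Exact Q^{(n)}(T(P)) = |T(P)| * prod_a Q(a)^{n_a}."""
    n = sum(counts)
    size = factorial(n)
    for na in counts:
        size //= factorial(na)
    weight = 1.0
    for na, qa in zip(counts, Q):
        weight *= qa ** na
    return size * weight

def kl(counts, Q):
    """D(P || Q) in nats, with P(a) = n_a / n."""
    n = sum(counts)
    return sum(na / n * log((na / n) / qa) for na, qa in zip(counts, Q) if na > 0)

counts = [6, 3, 1]                # type P = (0.6, 0.3, 0.1), n = 10
Q = [0.5, 0.25, 0.25]             # i.i.d. source distribution
n, m = sum(counts), len(counts)
prob, rate = prob_type_class(counts, Q), exp(-n * kl(counts, Q))
print((n + 1) ** (-m) * rate <= prob <= rate)   # True: the standard bound pair holds
print(prob, rate)                               # ≈ 0.051 vs ≈ 0.485
```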
Why this is useful
- Converts empirical-frequency events into entropy/divergence exponents.
- Provides finite-n (non-asymptotic) bounds with only polynomial slack.
- Serves as a discrete, combinatorial route to large deviation principles (e.g., the intuition behind Sanov-type results; see the numerical sketch below).
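As a concrete instance of the Sanov-style intuition, here is a sketch for a fair coin (alphabet of size $m = 2$, source $Q = (1/2, 1/2)$, all values illustrative): the probability that the empirical frequency of heads reaches $0.7$ decays at rate $\min D(P \| Q)$ over the qualifying types.

```python
from math import comb, log, exp

def d_binary(p, q):
    """Binary KL divergence D(p || q) in nats."""
    out = 0.0
    if p > 0:
        out += p * log(p / q)
    if p < 1:
        out += (1 - p) * log((1 - p) / (1 - q))
    return out

n, q, thresh = 100, 0.5, 0.7
# Exact tail: sum Q^{(n)}(T(P)) over all types with P(heads) >= thresh.
exact = sum(comb(n, k) * q**k * (1 - q)**(n - k)
            for k in range(n + 1) if k / n >= thresh)
d_min = min(d_binary(k / n, q) for k in range(n + 1) if k / n >= thresh)
print(exact, exp(-n * d_min))        # ≈ 3.9e-05 vs ≈ 2.7e-04
print(-log(exact) / n, d_min)        # empirical exponent ≈ 0.101 vs D_min ≈ 0.082
```

The two exponents agree up to a polynomial-in-$n$ correction, consistent with the $(n+1)^{-m}$ slack in the bounds above.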