Markov chain (discrete time)

Stochastic process with the Markov property in discrete time; specified by a transition kernel/matrix and used to model relaxation and sampling.
A discrete-time Markov chain is a basic model for stochastic dynamics and is frequently used for equilibrium sampling (e.g. MCMC) and for coarse-grained dynamics in statistical mechanics.

Definition (Markov property)

Let $(X_n)_{n\ge 0}$ be a process on a state space $S$. It is a Markov chain if, for all $n$ and all measurable $A\subseteq S$,

$$\mathbb{P}(X_{n+1}\in A \mid X_0,\dots,X_n)=\mathbb{P}(X_{n+1}\in A \mid X_n).$$

The conditional law is determined by a transition kernel $P(x,A)$ (or by a matrix $P_{ij}$ when $S$ is countable).
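As a concrete illustration, a kernel on a finite state space is just a row-stochastic matrix, and the chain can be simulated directly. A minimal sketch using NumPy; the 3-state matrix below is purely illustrative:

```python
import numpy as np

# Illustrative 3-state transition matrix: rows sum to 1 and
# P[i, j] = P(X_{n+1} = j | X_n = i).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

def sample_path(P, x0, n_steps, rng):
    """Draw (X_0, ..., X_n): each step depends only on the current state."""
    path = [x0]
    for _ in range(n_steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

rng = np.random.default_rng(0)
path = sample_path(P, x0=0, n_steps=10, rng=rng)
print(path)  # a length-11 trajectory of states in {0, 1, 2}
```

Note that `sample_path` uses only the current state `path[-1]` to draw the next one, which is exactly the Markov property above.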

Transition operator and $n$-step transitions

For bounded functions $f$ on $S$, define

$$(Pf)(x)=\int f(y)\,P(x,dy),$$

and define $n$-step kernels by composition: $P^{n+m}=P^n P^m$ (Chapman–Kolmogorov).

If $\mu_0$ is the initial distribution, then the distribution after $n$ steps is

$$\mu_n = \mu_0 P^n.$$
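For a finite state space, $\mu_n = \mu_0 P^n$ is a row vector times a matrix power, and the Chapman–Kolmogorov identity can be checked numerically. A sketch with an illustrative 2-state kernel:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # illustrative 2-state kernel
mu0 = np.array([1.0, 0.0])   # start deterministically in state 0

# Distribution after n steps: mu_n = mu_0 P^n (row vector on the left).
mu5 = mu0 @ np.linalg.matrix_power(P, 5)
print(mu5)                   # still a probability vector

# Chapman-Kolmogorov: P^(n+m) = P^n P^m.
lhs = np.linalg.matrix_power(P, 7)
rhs = np.linalg.matrix_power(P, 3) @ np.linalg.matrix_power(P, 4)
assert np.allclose(lhs, rhs)
```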

Stationary distribution

A probability measure $\pi$ on $S$ is stationary if

$$\pi = \pi P, \quad\text{i.e.}\quad \pi(A)=\int \pi(dx)\,P(x,A).$$

In statistical mechanics, $\pi$ is often chosen to be an equilibrium distribution such as a Gibbs (Boltzmann) measure.
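For a finite chain, $\pi = \pi P$ says that $\pi$ is a left eigenvector of $P$ with eigenvalue $1$, equivalently a right eigenvector of $P^\top$; this gives a direct way to compute it. A minimal sketch with an illustrative 2-state kernel:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # illustrative 2-state kernel

# pi = pi P means pi is a left eigenvector of P for eigenvalue 1,
# i.e. a right eigenvector of P transposed.
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1.0))
pi = np.real(vecs[:, k])
pi = pi / pi.sum()           # normalize to a probability vector
print(pi)                    # → approximately [0.8333, 0.1667]
```

Normalizing by the sum also fixes the arbitrary sign of the eigenvector returned by `eig`.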

Detailed balance and reversibility

If $\pi$ is stationary, the chain is reversible with respect to $\pi$ when it satisfies detailed balance:

$$\pi(dx)\,P(x,dy)=\pi(dy)\,P(y,dx)$$

(in the discrete-state case: $\pi_i P_{ij}=\pi_j P_{ji}$).
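In the discrete case, detailed balance is equivalent to symmetry of the "flow" matrix $F_{ij}=\pi_i P_{ij}$, which is easy to test numerically. The kernel and its stationary law below are illustrative (any irreducible two-state chain is reversible):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])    # illustrative kernel
pi = np.array([5/6, 1/6])     # its stationary distribution

# Detailed balance: the flow matrix F[i, j] = pi_i * P[i, j]
# must be symmetric, i.e. pi_i P_ij = pi_j P_ji for all i, j.
F = pi[:, None] * P
assert np.allclose(F, F.T)    # reversible with respect to pi
```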

Expectations along the chain

For an observable $f$ and initial law $\mu_0$,

$$\mathbb{E}_{\mu_0}[f(X_n)] = \int \mu_0(dx)\,(P^n f)(x).$$
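For a finite chain this identity can be checked by comparing the exact value $\mu_0 P^n f$ with a Monte Carlo average over simulated trajectories; all numbers below are illustrative:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])        # illustrative kernel
mu0 = np.array([0.5, 0.5])        # initial law
f = np.array([1.0, -1.0])         # observable as a vector on states {0, 1}
n = 10

# Exact: E[f(X_n)] = mu_0 P^n f.
exact = mu0 @ np.linalg.matrix_power(P, n) @ f

# Monte Carlo: simulate trajectories and average f at time n.
rng = np.random.default_rng(1)
def f_at_n():
    x = rng.choice(2, p=mu0)      # X_0 ~ mu_0
    for _ in range(n):
        x = rng.choice(2, p=P[x])
    return f[x]

mc = np.mean([f_at_n() for _ in range(20000)])
print(exact, mc)                  # agree up to Monte Carlo error
```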

Time averages and convergence to equilibrium depend on additional properties (irreducibility, aperiodicity, recurrence), but the operator viewpoint above is the main structural tool.