Saddle-Point (Laplace) Method in One Dimension

Refined asymptotics for integrals of $e^{n f(x)}$ near a nondegenerate maximizer, giving Gaussian prefactors beyond the Laplace principle.

This lemma refines the Laplace principle by identifying the leading prefactor when the maximum is attained at a nondegenerate interior point.

Statement (one-dimensional nondegenerate maximum)

Let $f:[a,b]\to\mathbb{R}$ be twice continuously differentiable and suppose:

  • $f$ has a unique global maximum at an interior point $x_0\in(a,b)$,
  • $f''(x_0)<0$ (nondegenerate maximum),
  • $g:[a,b]\to\mathbb{R}$ is continuous with $g(x_0)\neq 0$.

Define

$$I_n = \int_a^b e^{n f(x)}\, g(x)\,dx.$$

Then, as $n\to\infty$,

$$I_n = e^{n f(x_0)}\, g(x_0)\, \sqrt{\frac{2\pi}{n\,|f''(x_0)|}}\, \big(1+o(1)\big).$$

Under higher smoothness assumptions on $f$ and $g$, one can obtain a full asymptotic expansion in powers of $1/n$.
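As a sanity check, the leading-order formula can be compared against direct numerical quadrature. The sketch below is an illustrative example (not from the source): it takes $f(x)=\cos x$ and $g(x)=1+x^2$ on $[-1,1]$, so $x_0=0$, $f(x_0)=1$, $f''(x_0)=-1$, $g(x_0)=1$, and uses a composite trapezoidal rule in pure Python.

```python
import math

def trapezoid(h, a, b, m=200_000):
    """Composite trapezoidal rule for h on [a, b] with m subintervals."""
    dx = (b - a) / m
    s = 0.5 * (h(a) + h(b)) + sum(h(a + i * dx) for i in range(1, m))
    return s * dx

# Illustrative choice: f(x) = cos x, g(x) = 1 + x^2 on [-1, 1].
# Unique interior maximum at x0 = 0 with f(x0) = 1, f''(x0) = -1, g(x0) = 1.
n = 50

# Integrate e^{n(f(x) - f(x0))} g(x) instead of e^{n f(x)} g(x) to avoid
# overflow; this rescales both I_n and the predicted leading term by the
# same factor e^{-n f(x0)}, so the ratio is unchanged.
I_shifted = trapezoid(lambda x: math.exp(n * (math.cos(x) - 1.0)) * (1 + x * x),
                      -1.0, 1.0)

# g(x0) * sqrt(2*pi / (n * |f''(x0)|)), also with e^{n f(x0)} factored out.
leading_shifted = 1.0 * math.sqrt(2 * math.pi / (n * 1.0))

ratio = I_shifted / leading_shifted
print(ratio)  # close to 1; the deviation is O(1/n)
```

The ratio tends to 1 as $n$ grows, with a correction of order $1/n$ coming from the next term of the expansion.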

Key hypotheses and conclusions

Hypotheses

  • A unique interior maximizer $x_0$ for $f$.
  • Nondegeneracy: $f''(x_0)<0$.
  • Mild regularity of $f$ and $g$ near $x_0$.

Conclusions

  • The integral is asymptotically Gaussian around $x_0$ after Taylor expansion of $f$: the leading exponential growth is $e^{n f(x_0)}$, and the subexponential prefactor is of order $n^{-1/2}$.
  • This yields a quantitative refinement of the Laplace principle: $\frac{1}{n}\log I_n \to f(x_0)$.
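The log-scale limit in the second bullet can be observed numerically. The sketch below is again an illustrative example ($f(x)=\cos x$, $g\equiv 1$ on $[-1,1]$, so $f(x_0)=1$): factoring out $e^{n f(x_0)}$ keeps the computation overflow-free even for large $n$.

```python
import math

def trapezoid(h, a, b, m=100_000):
    """Composite trapezoidal rule for h on [a, b] with m subintervals."""
    dx = (b - a) / m
    s = 0.5 * (h(a) + h(b)) + sum(h(a + i * dx) for i in range(1, m))
    return s * dx

# Illustrative: f(x) = cos x, g = 1 on [-1, 1]; x0 = 0, f(x0) = 1.
# Write (1/n) log I_n = f(x0) + (1/n) log int e^{n(f(x)-f(x0))} dx,
# so only the bounded shifted integrand is ever exponentiated.
f0 = 1.0
errors = []
for n in (10, 100, 1000):
    shifted = trapezoid(lambda x: math.exp(n * (math.cos(x) - f0)), -1.0, 1.0)
    log_In_over_n = f0 + math.log(shifted) / n
    errors.append(abs(log_In_over_n - f0))

print(errors)  # shrinks toward 0, at rate ~ (log n)/(2n) from the n^{-1/2} prefactor
```

The error decays like $(\log n)/(2n)$, reflecting the $n^{-1/2}$ prefactor that the refined statement makes explicit.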

Proof idea / significance

Expand $f$ near $x_0$: $f(x)=f(x_0)+\tfrac12 f''(x_0)(x-x_0)^2 + r(x)$, where $r(x)=o((x-x_0)^2)$. The main contribution to $I_n$ comes from a shrinking neighborhood of size $O(n^{-1/2})$ around $x_0$, on which $g(x)\approx g(x_0)$ and $r(x)$ is negligible compared to the quadratic term. After rescaling $y=\sqrt{n}(x-x_0)$, the integral reduces to a Gaussian integral, giving the stated prefactor. Contributions away from $x_0$ are exponentially smaller because $f(x)<f(x_0)$ there.
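The localization step can be checked directly: almost all of the integral's mass sits in a window of width $C\,n^{-1/2}$ around $x_0$. The sketch below (illustrative example with $f(x)=\cos x$, $g\equiv 1$ on $[-1,1]$, and a hypothetical window constant $C=10$) compares the mass inside the window with the whole integral.

```python
import math

def trapezoid(h, a, b, m=100_000):
    """Composite trapezoidal rule for h on [a, b] with m subintervals."""
    dx = (b - a) / m
    s = 0.5 * (h(a) + h(b)) + sum(h(a + i * dx) for i in range(1, m))
    return s * dx

# Illustrative: f(x) = cos x, g = 1 on [-1, 1]; x0 = 0, f(x0) = 1.
# Compare the mass of the shifted integrand e^{n(f(x) - f(x0))} inside a
# window of half-width C / sqrt(n) with the mass over the whole interval.
n, C = 400, 10.0
h = lambda x: math.exp(n * (math.cos(x) - 1.0))

delta = C / math.sqrt(n)          # = 0.5 for these values
inside = trapezoid(h, -delta, delta)
total = trapezoid(h, -1.0, 1.0)

fraction = inside / total
print(fraction)  # ~ 1: the tail |x - x0| > C n^{-1/2} is exponentially negligible
```

Outside the window the integrand is bounded by $e^{n(f(x)-f(x_0))}$ with a strictly negative exponent, which is why the tail is exponentially small rather than merely polynomially small.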

In statistical mechanics, saddle-point estimates justify mean-field and variational approximations of partition functions (for example, after introducing an order parameter or via Hubbard–Stratonovich transforms) and they often explain the appearance of Gaussian fluctuations around equilibrium points.