Quadrature Methods

A crucial part of transport-map applications is selecting a suitable quadrature method. Quadrature is used in map optimization, in particular when evaluating the Kullback–Leibler divergence

```math
D_{\mathrm{KL}}\bigl(T_\sharp \rho \,\|\, \pi\bigr)
    = \int \Bigl[ \log \rho(z) - \log \pi\bigl(T(a, z)\bigr)
        - \log \bigl|\det \nabla_z T(a, z)\bigr| \Bigr]\, \rho(z)\, \mathrm{d}z
```

Here, ``z`` denotes the variable in the reference space with density ``\rho(z)``, ``\pi(x)`` is the target density, and ``T(a, z)`` is the transport map parameterized by ``a``.

In practice, we approximate the integral by a quadrature sum; since the ``\log \rho(z)`` term does not depend on the map parameters ``a``, it can be dropped from the objective:

```math
\sum_{i=1}^{N} w_{q,i} \Bigl[ -\log \pi\bigl(T(a, z_{q,i})\bigr)
    - \log \bigl|\det \nabla_z T(a, z_{q,i})\bigr| \Bigr]
```

The quadrature points ``z_{q,i}`` and weights ``w_{q,i}`` must be chosen so that the sum approximates expectations with respect to the reference measure ``\rho(z)``, which is typically the standard Gaussian.

Especially in Bayesian inference, where evaluating the target density ``\pi(x)`` can be expensive, efficient quadrature methods are important for reducing the number of target evaluations [4].
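
To make this concrete, the following minimal sketch evaluates such a points-and-weights sum in plain Julia, using hand-rolled Monte Carlo draws with uniform weights; every rule below plugs into the same pattern.

```julia
# Minimal sketch of the quadrature sum: approximate E[f(Z)] for Z ~ N(0, I)
# from a set of points and weights. Plain Monte Carlo draws with uniform
# weights stand in for the rule here.
f(z) = sum(abs2, z)               # E[f(Z)] = d for a standard Gaussian
N, d = 10_000, 2
points = randn(N, d)              # rows are the quadrature points z_i
weights = fill(1 / N, N)          # uniform Monte Carlo weights
estimate = sum(weights[i] * f(points[i, :]) for i in 1:N)
# estimate ≈ 2.0
```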

Monte Carlo

Monte Carlo estimates an expectation ``\mathbb{E}[f(Z)]`` by sampling ``Z_i \sim \mathcal{N}(0, I)`` and forming the sample average. It is simple and robust, and the weights are uniform. Convergence is stochastic, at rate ``\mathcal{O}(n^{-1/2})``, but independent of dimension.

We start by loading the necessary packages:

```julia
using TransportMaps
using Plots

# Draw 500 standard-Gaussian samples in d = 2; the weights are uniform
mc = MonteCarloWeights(500, 2)
p_mc = scatter(mc.points[:, 1], mc.points[:, 2], ms=3,
    label="MC samples", title="Monte Carlo (500 pts)", aspect_ratio=1)
```
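
As a quick sanity check of the ``\mathcal{O}(n^{-1/2})`` rate, one can watch the error of a simple moment estimate shrink as ``n`` grows. The sketch below assumes the rule also exposes a `weights` field alongside the `points` field used above.

```julia
# Error of the Monte Carlo estimate of E[z_1^2] = 1 for growing n
# (the `weights` field is our assumption; `points` is used above).
for n in (100, 1_000, 10_000)
    q = MonteCarloWeights(n, 2)
    est = sum(q.weights .* q.points[:, 1] .^ 2)
    println("n = $n: |error| = ", abs(est - 1))
end
```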

Latin Hypercube Sampling (LHS)

Latin hypercube sampling is a stratified design that improves space-filling over plain Monte Carlo; the weights remain uniform. It often reduces variance in low to moderate dimensions.

```julia
lhs = LatinHypercubeWeights(500, 2)
p_lhs = scatter(lhs.points[:, 1], lhs.points[:, 2], ms=3,
    label="LHS samples", title="Latin Hypercube (500 pts)", aspect_ratio=1)
```
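
One way to see the stratification is to map a coordinate back to ``[0,1]`` through the standard normal CDF: if the design is built by Latin hypercube sampling on the unit cube and pushed through the inverse normal CDF (an assumption about the construction here), each of the 500 equal-width bins contains exactly one point.

```julia
using Distributions  # for Normal and cdf

# Map the Gaussian points back to [0, 1] and count points per 1/N-width bin
# (assumes an LHS design in [0,1]^d pushed through the inverse normal CDF).
u = cdf.(Normal(), lhs.points[:, 1])
counts = [count(x -> (i - 1) / 500 <= x < i / 500, u) for i in 1:500]
println(extrema(counts))  # (1, 1) for an exact LHS design
```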

Tensor-product Gauss–Hermite

Gauss–Hermite quadrature provides nodes ``z_{q,i}`` and weights ``w_{q,i}`` such that, for smooth integrands, the integral with respect to the Gaussian density is approximated very accurately. Tensor-product rules take the 1D rule in each coordinate and form all combinations, producing ``N = n^d`` nodes for ``n`` points per dimension.

```julia
hermite = GaussHermiteWeights(5, 2)
p_hermite = scatter(hermite.points[:, 1], hermite.points[:, 2],
    ms=4, label="Gauss–Hermite", title="Tensor Gauss–Hermite (5 × 5)", aspect_ratio=1)
```
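
As a quick exactness check: an ``n``-point Gauss–Hermite rule integrates polynomials of degree up to ``2n - 1`` per coordinate exactly against the Gaussian density, so the 5-point rule should recover the fourth moment ``\mathbb{E}[z_1^4] = 3`` to machine precision (assuming, as above, a `weights` field normalized against the Gaussian).

```julia
# Fourth Gaussian moment via the 5-point tensor rule; exact for
# polynomials up to degree 9 per coordinate (weights assumed normalized
# against the standard Gaussian density).
est = sum(hermite.weights .* hermite.points[:, 1] .^ 4)
println(est)  # ≈ 3.0
```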

Sparse Smolyak Gauss–Hermite

Sparse Smolyak grids combine 1D quadrature rules across dimensions with carefully chosen coefficients that cancel redundant high-order interactions. The result is substantially fewer nodes than the full tensor product while retaining high accuracy for smooth integrands with bounded mixed derivatives. Note that Smolyak quadrature can produce negative weights; this is expected and comes from the higher-order correction terms.

```julia
sparse = SparseSmolyakWeights(2, 2)
p_sparse = scatter(sparse.points[:, 1], sparse.points[:, 2], ms=6,
    label="Smolyak", title="Sparse Smolyak (level 2)", aspect_ratio=1)
```
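
Two properties are worth checking directly, under the same assumptions about the `(level, dimension)` constructor arguments and the `weights` field: the node count grows far more slowly with dimension than the ``5^d`` tensor grid, and some of the weights are indeed negative.

```julia
# Sparse vs. tensor node counts, plus a check for negative weights
# (constructor argument order and `weights` field as assumed above).
for d in (2, 4, 6)
    q = SparseSmolyakWeights(2, d)
    println("d = $d: sparse nodes = ", size(q.points, 1),
            ", tensor 5^$d = ", 5^d)
end
println(any(SparseSmolyakWeights(2, 2).weights .< 0))  # true: negative weights
```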

Quick usage notes

  • Use Monte Carlo or LHS for high-dimensional problems where deterministic quadrature is infeasible.

  • Use tensor Gauss–Hermite in very low dimensions when the integrand is smooth and high accuracy is required.

  • Use Sparse Smolyak for intermediate dimensions (e.g. ``d \lesssim 6``) to get higher accuracy at much lower cost than the full tensor rule; the sketch below compares all four rules on a simple test integral.
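
To compare the rules directly, the following sketch estimates ``\mathbb{E}[e^{0.1 z_1}] = e^{0.005}``, a smooth low-dimensional integrand, with each rule (again assuming `points` and `weights` fields):

```julia
# Side-by-side error on a smooth test integrand, E[exp(0.1 z_1)] = exp(0.005)
f(z) = exp(0.1 * z[1])
exact = exp(0.1^2 / 2)
rules = [MonteCarloWeights(500, 2), LatinHypercubeWeights(500, 2),
         GaussHermiteWeights(5, 2), SparseSmolyakWeights(2, 2)]
for q in rules
    est = sum(q.weights[i] * f(q.points[i, :]) for i in axes(q.points, 1))
    println(nameof(typeof(q)), ": |error| = ", abs(est - exact))
end
```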


This page was generated using Literate.jl.