Numerical Methods

ODEs, PDEs, matrix diagonalization, and time evolution — a practical toolkit that powers modern physics and computation, from planetary motion to quantum dynamics.

[Figure: Discretization and grids. Discretization turns continuous equations into computable algebra: grids, timesteps, and iterative solvers reveal dynamics that paper cannot.]

Introduction

Numerical methods are the bridge between elegant theory and working predictions. Many equations of physics are analytically intractable; others admit closed forms but only under idealized assumptions. Computation resolves this tension by replacing calculus with carefully controlled approximations. We discretize time into steps, space into grids, and functions into vectors. We linearize nonlinearities, iterate toward fixed points, and track error at each stage. Far from being a mere workaround, numerical analysis is a language of its own with grammar (stability), semantics (consistency), and style (efficiency). Mastering it means being able to turn models into numbers you can test, visualize, and trust.

This page blends physics and computation. We will treat differential equations that describe motion and fields, linear algebra routines that diagonalize Hamiltonians, and integrators that advance states through time. Along the way we emphasize three guiding questions: How do we discretize? How does error propagate? and When is an algorithm stable? These questions matter whether you are simulating a damped oscillator, integrating Schrödinger’s equation, or marching Maxwell’s equations across a Yee grid.

From Equations to Algorithms

The first move in computation is discretization. For an ODE \(\dot y=f(t,y)\), we replace the derivative by a finite difference. The simplest forward Euler step, \[ y_{n+1}=y_n+\Delta t\, f(t_n,y_n), \] is explicit and first order: the global error scales as \(\mathcal{O}(\Delta t)\). Higher-order Runge–Kutta methods (like RK4) evaluate \(f\) at staged points to cancel error terms and achieve \(\mathcal{O}(\Delta t^4)\) accuracy. Yet, order is not destiny—stability controls whether errors blow up. For stiff systems (rapidly decaying modes mixed with slow ones), implicit schemes such as backward Euler or the trapezoidal rule are preferred because their stability regions contain the entire left half-plane. Choosing a method is therefore a physics decision disguised as a numerical one: what are the timescales, and which modes must be trusted?
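As a concrete sketch (the test equation \(\dot y=-y\) and the step counts are illustrative choices, not from any particular source), forward Euler and classical RK4 can be compared side by side: halving \(\Delta t\) should roughly halve Euler's error and cut RK4's by a factor of sixteen.

```python
import numpy as np

def euler_step(f, t, y, dt):
    # Forward Euler: explicit, first order in dt.
    return y + dt * f(t, y)

def rk4_step(f, t, y, dt):
    # Classical RK4: staged evaluations cancel error terms through O(dt^4).
    k1 = f(t, y)
    k2 = f(t + dt/2, y + dt/2 * k1)
    k3 = f(t + dt/2, y + dt/2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Test problem y' = -y, y(0) = 1, with exact solution exp(-t).
f = lambda t, y: -y
for steps in (100, 200):
    dt, y_e, y_r = 1.0 / steps, 1.0, 1.0
    for n in range(steps):
        y_e = euler_step(f, n * dt, y_e, dt)
        y_r = rk4_step(f, n * dt, y_r, dt)
    print(steps, abs(y_e - np.exp(-1)), abs(y_r - np.exp(-1)))
```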

For PDEs we extend the idea to space. Finite differences approximate derivatives via stencils (e.g., \(u_{xx}(x_i)\approx\frac{u_{i-1}-2u_i+u_{i+1}}{\Delta x^2}\)), finite elements expand fields in basis functions with weak formulations, and finite volumes enforce conservation laws by balancing fluxes across cell faces. The celebrated CFL condition ties the time step to the wave speed and grid spacing: information must not outrun the grid. Violating CFL does not merely introduce error; it changes the physics you are modeling.
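The three-point stencil is easy to sanity-check in code (a sketch; the test function \(\sin x\) is an arbitrary choice): applied to \(\sin x\) it should return approximately \(-\sin x\), with the worst-case error falling roughly fourfold each time \(\Delta x\) is halved.

```python
import numpy as np

def u_xx(u, dx):
    # Central three-point stencil for the second derivative (interior points).
    return (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2

for n in (51, 101, 201):
    x = np.linspace(0.0, np.pi, n)
    dx = x[1] - x[0]
    # For u = sin(x), u_xx = -sin(x); measure the maximum stencil error.
    err = np.max(np.abs(u_xx(np.sin(x), dx) + np.sin(x[1:-1])))
    print(n, err)
```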

Linear algebra is the engine that drives everything. Sparse matrices arise from discretizations; iterative Krylov methods (CG, BiCGSTAB, GMRES) solve the resulting systems; preconditioners rebalance spectra to accelerate convergence. Eigenvalue routines (power method, Lanczos, Arnoldi) expose normal modes, band structures, and stationary states. In quantum mechanics these tools diagonalize Hamiltonians \(H\) to obtain energies and eigenstates, or solve the linear system created by implicit time-stepping.
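A minimal SciPy sketch (the 1D Poisson matrix, grid size, and right-hand side are stand-ins): conjugate gradients solves the sparse system, and a shift-and-invert Lanczos call extracts the lowest eigenvalues.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
dx = 1.0 / (n + 1)
# Sparse 1D Poisson operator (-u'' with Dirichlet boundaries on [0, 1]).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc') / dx**2

b = np.ones(n)                       # arbitrary source term
phi, info = spla.cg(A, b)            # Krylov solve; A is symmetric positive definite
assert info == 0                     # info == 0 signals convergence

# Shift-and-invert Lanczos near sigma = 0 targets the smallest eigenvalues.
vals = spla.eigsh(A, k=3, sigma=0.0, return_eigenvectors=False)
print(np.sort(vals))                 # should approach (k*pi)^2 for k = 1, 2, 3
```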

[Figure: Stability regions. Stability regions define which eigenvalues of the Jacobian may be integrated safely; implicit methods trade cost for robustness on stiff problems.]

Core Techniques by Example

1) ODE Integrators. Consider the damped, driven oscillator \(m\ddot x+\gamma\dot x+kx=F_0\cos\omega t\). We rewrite as a first-order system for \((x,v)\) and integrate with RK4 when the forcing is smooth. If \(\gamma\) is large (stiff regime), an implicit method avoids the tiny \(\Delta t\) that explicit schemes would demand. We diagnose stiffness by the ratio of eigenvalues of the Jacobian and select adaptive step controllers (PI or PID) to keep local error near a tolerance.
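A workflow sketch with SciPy's adaptive integrators (all parameter values below are placeholders): rewrite the oscillator as a first-order system and let the step controller hold local error near tolerance; for large \(\gamma\), swapping method='RK45' for the implicit 'Radau' avoids the tiny explicit steps.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, gamma, k, F0, omega = 1.0, 0.5, 4.0, 1.0, 1.5   # placeholder parameters

def rhs(t, s):
    # First-order form of m*x'' + gamma*x' + k*x = F0*cos(omega*t); s = (x, v).
    x, v = s
    return [v, (F0 * np.cos(omega * t) - gamma * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0],
                method='RK45',          # switch to 'Radau' in the stiff regime
                rtol=1e-8, atol=1e-10)  # adaptive controller targets these tolerances
print(sol.y[0, -1])                     # position at t = 50
```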

2) PDEs: Diffusion and Waves. The heat equation \(u_t=\alpha u_{xx}\) with central differences gives \(u^{n+1}_i=u^n_i+\lambda(u^n_{i-1}-2u^n_i+u^n_{i+1})\) where \(\lambda=\alpha\Delta t/\Delta x^2\). Stability requires \(\lambda\le \tfrac12\). For the wave equation \(u_{tt}=c^2 u_{xx}\), leapfrog schemes advance \(u\) on staggered time levels with a CFL limit \(c\Delta t/\Delta x\le 1\). Finite elements handle irregular geometries by assembling local stiffness and mass matrices from basis functions; mass lumping yields efficient explicit updates, while implicit Newmark schemes provide unconditional stability for structural dynamics.
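The explicit heat update as a runnable sketch (the initial profile and parameters are arbitrary): with \(\lambda=0.4\) the loop is stable and tracks the analytic decay of the sine mode; raising \(\lambda\) above \(1/2\) makes the same loop blow up.

```python
import numpy as np

alpha, L, nx = 1.0, 1.0, 101
dx = L / (nx - 1)
lam = 0.4                        # lambda = alpha*dt/dx^2, kept below 1/2
dt = lam * dx**2 / alpha

x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)            # initial profile, zero at both boundaries

for _ in range(500):
    # FTCS update on interior points; Dirichlet boundaries stay at zero.
    u[1:-1] += lam * (u[:-2] - 2 * u[1:-1] + u[2:])

# The sin(pi*x) mode decays as exp(-alpha*pi^2*t); compare against it.
print(u.max(), np.exp(-np.pi**2 * 500 * dt))
```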

3) Linear Systems and Eigenproblems. Discretizing Poisson’s equation \(\nabla^2\phi=\rho\) produces large sparse systems \(A\phi=b\). Multigrid methods attack error scales recursively—smoothing high-frequency error on fine grids and correcting low-frequency components on coarse ones—achieving near-linear complexity. For eigenproblems \(A x=\lambda x\), the power method isolates the dominant mode; shift-and-invert focuses on interior spectra; Lanczos and Arnoldi generate Krylov subspaces to approximate many eigenpairs efficiently. In quantum models, these tools expose bound states, band diagrams, and resonance widths.
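A bare-bones power-method sketch (the symmetric random matrix is a stand-in): repeated multiplication amplifies the dominant eigendirection, and the Rayleigh quotient estimates its eigenvalue; shift-and-invert runs the same iteration on \((A-\sigma I)^{-1}\) to reach interior modes.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
A = A + A.T                          # symmetric stand-in matrix

x = rng.standard_normal(200)
for _ in range(1000):
    x = A @ x                        # amplify the dominant eigendirection
    x /= np.linalg.norm(x)           # renormalize to avoid overflow

lam = x @ A @ x                      # Rayleigh quotient estimate
print(abs(lam), np.max(np.abs(np.linalg.eigvalsh(A))))  # should roughly agree
```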

4) Time Evolution of Quantum States. The Schrödinger equation \(i\hbar\partial_t\psi=H\psi\) must conserve norm and phase. Naïve explicit updates drift. Structure-preserving schemes—Crank–Nicolson (exactly unitary for Hermitian \(H\) and second-order accurate), split-operator Fourier methods \(e^{-iT\Delta t/\hbar}e^{-iV\Delta t/\hbar}\), and Krylov exponentials \(\psi_{n+1}\approx \exp(-iH\Delta t/\hbar)\psi_n\)—honor the physics. With sparse \(H\), Lanczos time propagation builds a small subspace where the exponential is accurate; with periodic boundary conditions, FFTs diagonalize the kinetic operator and make split-step evolution \(\mathcal{O}(N\log N)\).
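A split-operator sketch in natural units (\(\hbar=m=1\); the grid, harmonic potential, and Gaussian initial state are arbitrary choices): each step is two phase multiplications plus an FFT round trip, and the norm is preserved to machine precision.

```python
import numpy as np

n, L, dt = 256, 40.0, 0.01
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L/n)   # angular wavenumbers of the FFT grid

V = 0.5 * x**2                             # harmonic potential (stand-in)
psi = np.exp(-(x - 2.0)**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L/n))   # normalize on the grid

expV = np.exp(-1j * V * dt / 2)            # half-step potential phase (hbar = 1)
expT = np.exp(-1j * k**2 * dt / 2)         # full kinetic step in k-space (m = 1)

for _ in range(1000):
    # Strang splitting V/2 -> T -> V/2: second-order accurate and unitary.
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

print(np.sum(np.abs(psi)**2) * (L/n))      # norm stays at 1 to machine precision
```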

5) Error, Convergence, and Validation. Numerical work is experimental. We perform grid-refinement studies (halve \(\Delta x\), \(\Delta t\); measure how error scales), monitor conserved quantities (energy, norm, charge), and compare against known limits (small-amplitude waves, free-particle propagation, analytic Green’s functions). Richardson extrapolation combines results at two step sizes to cancel the leading error term and raise the order of accuracy; a posteriori estimators choose where to refine adaptively. The goal is not a pretty plot but a quantified uncertainty budget.
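A concrete Richardson example (differentiating \(\sin\) at an arbitrary point): the central difference has an error expansion in even powers of \(h\), so combining two step sizes cancels the \(h^2\) term and jumps from second to fourth order.

```python
import numpy as np

def central_diff(f, x, h):
    # Second-order central difference; error = c2*h^2 + c4*h^4 + ...
    return (f(x + h) - f(x - h)) / (2 * h)

f, x, h = np.sin, 1.0, 0.1
d_h  = central_diff(f, x, h)
d_h2 = central_diff(f, x, h / 2)
richardson = (4 * d_h2 - d_h) / 3    # cancels the h^2 term

exact = np.cos(x)
print(abs(d_h - exact), abs(d_h2 - exact), abs(richardson - exact))
```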

[Figure: Split-operator propagation. Split-operator methods alternate between position and momentum space, exploiting FFTs to evolve quantum states efficiently while maintaining norm.]

Applications in Practice

Quantum Lattices. Tight-binding models on 1D and 2D lattices produce sparse Hamiltonians with band structures that reveal transport properties. By sweeping boundary twists (Bloch phases) one maps dispersion \(E(k)\). Disorder introduces Anderson localization; finite-size scaling of participation ratios diagnoses the transition. Time-dependent fields require Peierls substitution and gauge-consistent integrators to respect symmetries.
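A sketch of the twist sweep for a nearest-neighbor ring (the hopping \(t\) and ring size are placeholders): diagonalizing at each boundary phase \(\theta\) yields the exact tight-binding levels \(E=-2t\cos\big((2\pi n+\theta)/N\big)\), tracing out the dispersion.

```python
import numpy as np

t, N = 1.0, 8                         # hopping amplitude and ring size (placeholders)

def ring_hamiltonian(theta):
    # Nearest-neighbor chain with a Bloch twist e^{i*theta} on the closing bond.
    H = np.zeros((N, N), dtype=complex)
    for i in range(N - 1):
        H[i, i + 1] = H[i + 1, i] = -t
    H[N - 1, 0] = -t * np.exp(1j * theta)
    H[0, N - 1] = np.conj(H[N - 1, 0])
    return H

for theta in np.linspace(0.0, 2 * np.pi, 5):
    E = np.linalg.eigvalsh(ring_hamiltonian(theta))   # Hermitian eigensolve
    print(round(theta, 3), np.round(E, 3))            # -2t*cos((2*pi*n + theta)/N)
```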

Molecular Vibrations. Discretizing the nuclear Schrödinger equation along a bond coordinate yields bound-state spectra via finite differences plus a symmetric tridiagonal eigensolve. Anharmonicity appears as level spacing deviations; coupling multiple coordinates invites sparse tensor formats (MPS/TT) to tame dimensionality.
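A sketch of that eigensolve in scaled units (\(\hbar\) = reduced mass = 1; the harmonic potential stands in for a real bond potential): the finite-difference Hamiltonian is symmetric tridiagonal, which scipy.linalg.eigh_tridiagonal exploits directly.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n, L = 2000, 20.0                    # grid size and box length (placeholders)
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]

V = 0.5 * x**2                       # harmonic stand-in potential
diag = 1.0 / dx**2 + V               # main diagonal of -0.5*d^2/dx^2 + V
off = np.full(n - 1, -0.5 / dx**2)   # off-diagonals from the stencil

E = eigh_tridiagonal(diag, off, eigvals_only=True,
                     select='i', select_range=(0, 4))
print(E)                             # should approach 0.5, 1.5, 2.5, 3.5, 4.5
```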

Electromagnetics. The Yee finite-difference time-domain scheme staggers \(E\) and \(H\) in space and time, enforcing discrete Faraday and Ampère laws exactly. Courant factors dictate \(\Delta t\). Perfectly matched layers absorb outgoing waves; subcell models capture thin conductors without destroying stability.
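A 1D sketch of the Yee staggering in normalized units (\(c=1\); the grid size and source are arbitrary): \(E\) and \(H\) live on interleaved points and alternate updates, each driven by the other's spatial difference.

```python
import numpy as np

nx, steps = 200, 300
dx = 1.0
dt = 0.5 * dx                       # Courant factor 0.5 (must be <= 1 in 1D, c = 1)

E = np.zeros(nx)                    # E at integer grid points
H = np.zeros(nx - 1)                # H staggered on half-integer points

for n in range(steps):
    # Discrete Faraday law: H is advanced from the spatial difference of E.
    H += dt / dx * (E[1:] - E[:-1])
    # Discrete Ampere law: interior E is advanced from the difference of H.
    E[1:-1] += dt / dx * (H[1:] - H[:-1])
    E[nx // 2] += np.exp(-((n - 30) / 10.0)**2)   # soft Gaussian source

print(np.max(np.abs(E)))            # stays bounded at this Courant number
```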

Astrophysical Orbits. Symplectic integrators preserve phase-space volume and long-term invariants for Hamiltonian systems. In \(N\)-body gravity, leapfrog and higher-order splitting schemes avoid secular drift in energy, allowing million-orbit integrations that would be impossible with generic solvers of the same step size.
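A kick-drift-kick leapfrog sketch for a Kepler orbit (units with \(GM=1\); initial conditions arbitrary): over many orbits the energy error oscillates but shows no secular drift, the hallmark of a symplectic scheme.

```python
import numpy as np

def accel(r):
    # Newtonian gravity toward the origin with G*M = 1.
    return -r / np.linalg.norm(r)**3

def energy(r, v):
    return 0.5 * (v @ v) - 1.0 / np.linalg.norm(r)

r = np.array([1.0, 0.0])
v = np.array([0.0, 1.1])            # slightly eccentric bound orbit
dt = 0.01

E0 = energy(r, v)
for _ in range(100_000):            # roughly 160 orbital periods
    v = v + 0.5 * dt * accel(r)     # kick
    r = r + dt * v                  # drift
    v = v + 0.5 * dt * accel(r)     # kick
print(energy(r, v) - E0)            # stays small: no secular energy drift
```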

Uncertainty Quantification. Monte Carlo and quasi–Monte Carlo estimate expectations when dynamics or geometry are rough. Polynomial chaos expansions propagate uncertainty through deterministic solvers by representing input randomness in orthogonal polynomial bases tied to the distribution family (Hermite for Gaussian, Legendre for uniform), turning stochastic PDEs into coupled deterministic systems.
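A sketch contrasting plain Monte Carlo with a scrambled Sobol sequence (the smooth integrand on the unit square is an arbitrary test case): at the same sample count, the quasi-random estimate is typically closer to the exact value.

```python
import math
import numpy as np
from scipy.stats import qmc

f = lambda u: np.exp(-np.sum(u**2, axis=1))          # smooth integrand on [0, 1]^2
exact = (math.sqrt(math.pi) / 2 * math.erf(1.0))**2  # separable Gaussian integral

n = 4096                                             # power of two for Sobol
mc = f(np.random.default_rng(0).random((n, 2))).mean()
sobol = qmc.Sobol(d=2, scramble=True, seed=0).random(n)
qmc_est = f(sobol).mean()

print(abs(mc - exact), abs(qmc_est - exact))         # QMC error is usually smaller
```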

Quick Quiz – Numerical Methods

1) Why are implicit integrators preferred for stiff ODEs?

2) What does the CFL condition control in PDE solvers?

3) Which property should time propagation of Schrödinger’s equation preserve?