Tensor Networks

Computational compression for quantum many-body systems.

Figure: Diagram of MPS/PEPS tensor network graphs.
Tensor networks trade Hilbert space size for entanglement structure: a graph of small tensors represents states too large to write explicitly.

Introduction

A quantum state of \(N\) qubits requires \(2^N\) complex amplitudes — a number that explodes long before \(N\) reaches laboratory scales. Tensor networks circumvent this barrier by representing the state as a network of small tensors whose contracted virtual indices encode entanglement. Instead of storing amplitudes explicitly, we store structure. When the underlying physics has limited entanglement — as in gapped 1D ground states — the network remains compact and computations become feasible on classical hardware.
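For a rough sense of scale, here is a back-of-the-envelope sketch in Python (the bond dimension \(D=64\) is an arbitrary illustrative choice, not from any particular system) comparing the \(2^N\) amplitudes of an explicit state vector with the roughly \(N d D^2\) parameters of an MPS:

    # Explicit state-vector storage vs. MPS storage (parameter counts only).
    def statevector_params(n_qubits):
        return 2 ** n_qubits                    # one amplitude per basis state

    def mps_params(n_qubits, d=2, D=64):
        # Each site tensor is at most d x D x D; boundary tensors are smaller.
        return n_qubits * d * D * D

    for n in (20, 50, 100):
        print(n, statevector_params(n), mps_params(n))

At \(N=50\) the explicit vector already needs about \(10^{15}\) amplitudes, while the MPS needs a few hundred thousand parameters.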

This page blends physics intuition (area laws, correlation lengths, entanglement spectra) with computational technique (SVD truncation, optimal contraction, canonical forms). We start with Matrix Product States for 1D systems, extend to higher-dimensional PEPS and MERA, and close with algorithms such as DMRG and TEBD that turn the representation into a powerful simulator. Along the way we draw gentle parallels to machine learning: contraction resembles message passing; truncation mirrors low-rank regularization; network depth sculpts inductive bias.

Why Tensor Networks Work

Ground states of local gapped Hamiltonians obey an area law for entanglement entropy: the entropy of a region scales with its boundary, not its volume. In 1D the boundary of an interval is just a pair of points, so the entropy across any cut is bounded by a constant; the Schmidt spectrum then decays rapidly, and the state can be faithfully approximated by an MPS with modest bond dimension \(D\). Physically, a finite correlation length limits how much information flows across a cut; computationally, this keeps the effective rank of the bipartitioned wavefunction low, enabling SVD truncation without catastrophic error.
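To see the computational side concretely, the sketch below (plain numpy; the sizes are arbitrary) bipartitions a state, reads off the Schmidt spectrum by SVD, and measures the weight discarded when only \(D\) values are kept. A random state is volume-law entangled and truncates poorly, which is exactly why the area law matters:

    import numpy as np

    N, D = 12, 16                              # qubits, kept Schmidt values
    psi = np.random.randn(2**N) + 1j * np.random.randn(2**N)
    psi /= np.linalg.norm(psi)

    M = psi.reshape(2**(N // 2), -1)           # amplitude matrix across the middle cut
    s = np.linalg.svd(M, compute_uv=False)     # Schmidt coefficients

    discarded = 1.0 - np.sum(s[:D]**2)         # truncation error: discarded weight
    print(f"discarded Schmidt weight: {discarded:.3e}")

For a gapped ground state the same experiment would show the discarded weight falling off steeply with \(D\).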

In 2D and beyond, an area law no longer guarantees efficient computation, but it still suggests compact representations for many phases. PEPS captures local entanglement by placing a tensor on each site and connecting neighbors by virtual bonds. Contraction becomes #P-hard in general, but approximate schemes (boundary MPS, corner transfer matrices, tensor renormalization) control the cost. MERA uses isometries and disentanglers to represent critical states with algebraic correlations while preserving causal structure that makes certain contractions efficient.

Figure: MPS canonical form and Schmidt values.
Bringing an MPS to canonical form makes the Schmidt values explicit at each bond, turning normalization, overlaps, and truncation into local operations.

Matrix Product States (MPS)

An MPS factorizes a length-\(N\) wavefunction into site tensors \(A^{[k]}_{s_k}\) with physical index \(s_k\in\{0,1\}\) (for qubits) and virtual indices of dimension \(D\). Contracting neighbors along virtual bonds reconstructs amplitudes. Canonical forms — left-, right-, and mixed-canonical — expose the Schmidt coefficients \(\{\lambda_\alpha\}\) at each cut, letting us compute entanglement entropies \(S=-\sum_\alpha \lambda_\alpha^2 \log \lambda_\alpha^2\) and perform optimal truncations by discarding small \(\lambda_\alpha\).
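As a minimal sketch of how such a factorization is built (exact, with no truncation; the helper name is ours), a left-to-right sweep of SVDs turns any state vector into left-canonical MPS tensors, and the singular values at the middle cut give the entanglement entropy:

    import numpy as np

    def state_to_mps(psi, n, d=2):
        """Exact sequential-SVD factorization into tensors of shape (Dl, d, Dr)."""
        tensors, Dl, rest = [], 1, psi
        for _ in range(n - 1):
            rest = rest.reshape(Dl * d, -1)
            U, s, Vh = np.linalg.svd(rest, full_matrices=False)
            Dr = len(s)
            tensors.append(U.reshape(Dl, d, Dr))   # left-canonical site tensor
            rest = np.diag(s) @ Vh                 # carry weights to the right
            Dl = Dr
        tensors.append(rest.reshape(Dl, d, 1))
        return tensors

    n = 8
    psi = np.random.randn(2**n); psi /= np.linalg.norm(psi)
    mps = state_to_mps(psi, n)

    # Entropy S = -sum p log p with p = lambda^2 at the middle cut.
    lam = np.linalg.svd(psi.reshape(2**(n // 2), -1), compute_uv=False)
    p = lam**2; p = p[p > 1e-14]
    print("S =", -np.sum(p * np.log(p)))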

DMRG. The Density-Matrix Renormalization Group variationally minimizes the energy over MPS of fixed bond dimension. Sweeping left-to-right and back, we solve a sequence of effective eigenvalue problems to update site tensors, truncating by SVD at each bond. DMRG has become the gold standard for 1D ground states and low-lying spectra, handling thousands of sites with high precision when \(D\) is modest.
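A complete DMRG code is beyond this page, but its inner loop is short. The sketch below stands in a random symmetric matrix for the environment-contracted effective Hamiltonian (in a real code this comes from the MPO and the sweep's boundary tensors): solve the local eigenproblem, then split the optimized two-site block by truncated SVD.

    import numpy as np
    from scipy.sparse.linalg import eigsh

    # One two-site DMRG update with illustrative dimensions.
    Dl, d, Dr, Dmax = 8, 2, 8, 8
    dim = Dl * d * d * Dr
    H = np.random.randn(dim, dim)
    Heff = (H + H.T) / 2                       # stand-in effective Hamiltonian

    # Lowest eigenpair = locally optimal two-site wavefunction.
    energy, theta = eigsh(Heff, k=1, which='SA')
    theta = theta[:, 0].reshape(Dl * d, d * Dr)

    # Split back into site tensors, truncating to bond dimension Dmax.
    U, s, Vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(Dmax, len(s))
    A = U[:, :keep].reshape(Dl, d, keep)                     # left-canonical
    B = (np.diag(s[:keep]) @ Vh[:keep]).reshape(keep, d, Dr) # absorbs weights
    print("local energy:", energy[0], "kept bond dim:", keep)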

TEBD. Time-Evolving Block Decimation applies a Trotter–Suzuki decomposition to \(e^{-iHt}\) (or imaginary time for ground-state projection) and updates the MPS by local two-site gates followed by SVD truncation. Because entanglement entropy typically grows linearly in time after a global quench, the required \(D\) grows exponentially with simulated time, limiting reachable times — a physics limitation, not a software bug.
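The core TEBD move, again as an illustrative sketch (random stand-in tensors, a Heisenberg-like two-site gate): contract the neighboring pair, apply \(e^{-ih\,\delta t}\) to the physical legs, and re-split with a truncated SVD.

    import numpy as np
    from scipy.linalg import expm

    d, Dl, Dm, Dr, Dmax = 2, 8, 8, 8, 8
    dt = 0.05

    # Two-site gate exp(-i h dt) for a Heisenberg-like local term.
    sz = np.diag([0.5, -0.5]); sp = np.array([[0., 1.], [0., 0.]]); sm = sp.T
    h = np.kron(sz, sz) + 0.5 * (np.kron(sp, sm) + np.kron(sm, sp))
    gate = expm(-1j * dt * h).reshape(d, d, d, d)

    # Random stand-ins for neighboring MPS tensors.
    A = np.random.randn(Dl, d, Dm); B = np.random.randn(Dm, d, Dr)

    theta = np.einsum('lar,rbm->labm', A, B)            # contract the pair
    theta = np.einsum('pqab,labm->lpqm', gate, theta)   # apply the gate

    U, s, Vh = np.linalg.svd(theta.reshape(Dl * d, d * Dr), full_matrices=False)
    keep = min(Dmax, len(s))
    s = s[:keep] / np.linalg.norm(s[:keep])             # renormalize kept spectrum
    A_new = U[:, :keep].reshape(Dl, d, keep)
    B_new = (np.diag(s) @ Vh[:keep]).reshape(keep, d, Dr)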

PEPS, MERA, and Higher Dimensions

PEPS. Each lattice site carries a rank-5 tensor (four virtual bonds plus a physical index). Expectation values reduce to contracting a double-layer network — expensive but manageable with boundary MPS or corner transfer matrix (CTM) methods that iteratively approximate the environment. PEPS captures 2D area-law states and topological order, including toric code and RVB states.
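For intuition, a 2×2 open-boundary PEPS is small enough to contract exactly (the shapes and tensor names below are our own illustrative choices, with corner tensors carrying only two virtual bonds); the same contraction on a wide lattice is what boundary-MPS and CTM schemes approximate.

    import numpy as np

    d, D = 2, 3   # physical and virtual bond dimensions

    # Corner tensors of a 2x2 open-boundary PEPS.
    A = np.random.randn(d, D, D)   # top-left:     (phys, right, down)
    B = np.random.randn(d, D, D)   # top-right:    (phys, left, down)
    C = np.random.randn(d, D, D)   # bottom-left:  (phys, right, up)
    E = np.random.randn(d, D, D)   # bottom-right: (phys, left, up)

    # Contract all four virtual bonds to recover the 4-site wavefunction,
    # then the norm of the double-layer network.
    psi = np.einsum('iab,jac,kdb,ldc->ijkl', A, B, C, E)
    print("norm:", np.vdot(psi, psi).real)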

MERA. The Multiscale Entanglement Renormalization Ansatz stacks layers of disentanglers and isometries in a causal cone. Critical systems with scale invariance become compact: correlation functions and scaling dimensions emerge from transfer operators, linking numerics to conformal data.
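The building blocks themselves are simple to state, as the sketch below shows: a disentangler is a two-site unitary, and an isometry keeps only \(\chi\) columns of a unitary, so that \(w^\dagger w = \mathbb{1}\) (random matrices here; a real MERA optimizes them variationally).

    import numpy as np

    d, chi = 2, 2
    M = np.random.randn(d * d, d * d)
    u, _ = np.linalg.qr(M)                 # disentangler: a random two-site unitary

    w = u[:, :chi]                         # isometry: d*d legs in, chi legs out
    print(np.allclose(w.conj().T @ w, np.eye(chi)))   # True: w is isometric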

Contraction and Cost. Optimal contraction order determines feasibility. Treewidth controls complexity; heuristic planners (e.g., greedy, hypergraph partitioning) can cut costs by orders of magnitude. In practice, we balance bond dimension, environment accuracy, and truncation error to hit a target fidelity under a runtime budget.
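numpy exposes exactly this planning step through np.einsum_path, which reports a pairwise contraction order and the FLOP savings over naive contraction:

    import numpy as np

    D = 32
    A, B, C = (np.random.randn(D, D) for _ in range(3))

    # Greedy planner for a three-tensor chain; real networks have many more nodes.
    path, report = np.einsum_path('ab,bc,cd->ad', A, B, C, optimize='greedy')
    print(report)   # compares naive vs. optimized FLOP counts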

Figure: PEPS with corner transfer matrix environment.
Corner transfer matrices summarize the infinite environment, enabling expectation values and updates without full 2D contraction.

Applications and Parallels

Quantum Simulation. Compute ground states, excitations, and dynamics in spin chains, fermionic models (via Jordan–Wigner or Bravyi–Kitaev), and bosonic lattices. Extract correlation lengths, order parameters, and entanglement spectra to map phase diagrams. Gauge-constrained models benefit from symmetry-adapted tensors that enforce conservation at the level of indices.
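As one concrete ingredient from that toolbox, the Jordan–Wigner transformation maps fermionic modes to qubit operators with a string of \(Z\)s; a minimal construction (dense matrices, exponential in \(n\), so small systems only) is:

    import numpy as np
    from functools import reduce

    I2, Z = np.eye(2), np.diag([1., -1.])
    a = np.array([[0., 1.], [0., 0.]])     # lowering operator in the |0>,|1> basis

    def jw_annihilation(j, n):
        """Jordan-Wigner c_j on n sites: Z-string on sites < j, lowering at j."""
        return reduce(np.kron, [Z] * j + [a] + [I2] * (n - j - 1))

    n = 3
    c0, c1 = jw_annihilation(0, n), jw_annihilation(1, n)
    anti = c0 @ c1.conj().T + c1.conj().T @ c0
    print(np.allclose(anti, 0))            # {c_0, c_1^dagger} = 0, as required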

Compression & ML Analogies. Low-rank truncation is the SVD cousin of PCA; tensor networks implement factored representations analogous to deep linear networks. Tensor trains (another name for MPS) compress large multidimensional datasets; MPOs act like structured linear layers. Contraction equals message passing on a graph; canonicalization resembles layer normalization for amplitudes.
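The compression claim is easy to test: a smooth, separable function sampled on a grid collapses to tiny tensor-train ranks. A minimal TT-SVD sketch (tolerance-based truncation; the function and names are our own illustrative choices):

    import numpy as np

    def tt_svd(T, tol=1e-10):
        """Compress a dense array into tensor-train cores by truncated SVDs."""
        cores, r, M = [], 1, T
        for d in T.shape[:-1]:
            M = M.reshape(r * d, -1)
            U, s, Vh = np.linalg.svd(M, full_matrices=False)
            keep = max(1, int(np.sum(s > tol * s[0])))
            cores.append(U[:, :keep].reshape(r, d, keep))
            M = np.diag(s[:keep]) @ Vh[:keep]
            r = keep
        cores.append(M.reshape(r, T.shape[-1], 1))
        return cores

    x = np.linspace(0, 1, 8)
    T = np.exp(-sum(np.meshgrid(*[x] * 6, indexing='ij')))   # separable: TT rank 1
    cores = tt_svd(T)
    print("ranks:", [c.shape[2] for c in cores[:-1]])
    print("params:", sum(c.size for c in cores), "vs", T.size)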

Hybrid Quantum–Classical. Tensor networks simulate shallow quantum circuits by contracting along light cones or cutting wires and sampling boundary states. This provides baselines for quantum advantage experiments and tools for error mitigation by projecting noisy states onto low-entanglement manifolds.

Quick Quiz – Tensor Networks

1) What physical principle justifies MPS efficiency for many 1D ground states?

2) In DMRG, why is the SVD used at each bond?

3) What makes PEPS contraction challenging compared to MPS?