A polynomially accelerated fixed-point iteration for vector problems

Francesco Alemanno

Published 2025 in e-Journal of Analysis and Applied Mathematics

ABSTRACT

Fixed-point solvers are ubiquitous in nonlinear PDEs, yet their progress collapses whenever the Jacobian at the solution carries an eigenvalue arbitrarily close to one. We ask whether such stagnation can be removed without storing long histories or solving dense least-squares problems. Under two assumptions---(A1) the linearised error $e_n$ is dominated by a multiplier $m$ with $|m|<1$, and (A2) residuals shrink monotonically---we construct a quadratic blend of three iterates whose error polynomial has a double root at $m$. This three-point polynomial accelerator (TPA) cancels the stubborn mode up to $o(\|e_n\|)$, reduces to Aitken's $\Delta^2$ process in one dimension, and matches a doubly blended Anderson step of depth two when the regularisation vanishes, yet it keeps the Picard memory footprint. The only extra ingredient is a residual-based estimate of $w=(1-m)^{-1}$, obtained from a closed-form regularised least-squares fit that remains stable even when two residuals nearly coincide. Numerical experiments on linear systems with clustered spectra, a $320$-dimensional nonlinear $\tanh$ fixed point, and a $50\times 50$ Poisson discretisation show that TPA reaches the $10^{-8}$ residual tolerance in $32$, $36$, and $244$ map evaluations, respectively. In the same settings, SOR requires $663$ steps and Anderson acceleration of depth five consumes $52$, $38$, and $955$ evaluations. TPA therefore supplies a parameter-free, constant-memory drop-in accelerator whenever a single contraction factor throttles convergence.
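
To make the construction concrete: the abstract states that TPA reduces to Aitken's $\Delta^2$ process in one dimension and that its extrapolation weight is $w=(1-m)^{-1}$. The sketch below is an illustrative rendering of that scalar special case only, not code from the paper; the function name aitken_fixed_point, the tolerance defaults, and the example map are assumptions made here for illustration.

    import math

    def aitken_fixed_point(g, x0, tol=1e-8, max_iter=100):
        """Scalar fixed-point iteration for x = g(x), accelerated with
        Aitken's Delta^2 process -- the one-dimensional special case that
        the abstract says TPA reduces to. Illustrative sketch only."""
        x = float(x0)
        for _ in range(max_iter):
            x1 = g(x)                  # first Picard step
            x2 = g(x1)                 # second Picard step
            d1, d2 = x1 - x, x2 - x1   # successive increments (residuals)
            if abs(d2 - d1) < 1e-15:   # second difference vanishes near the solution
                return x2
            m_est = d2 / d1            # crude estimate of the dominant multiplier m
            w = 1.0 / (1.0 - m_est)    # the weight w = (1 - m)^{-1} from the abstract
            x_new = x + w * d1         # equivalent to Aitken's x - d1**2 / (d2 - d1)
            if abs(g(x_new) - x_new) < tol:
                return x_new
            x = x_new
        return x

    # Example: g(x) = cos(x) has a slowly contracting fixed point near 0.739085
    print(aitken_fixed_point(math.cos, 1.0))

In the vector setting the abstract replaces the scalar ratio d2 / d1 by a residual-based, regularised least-squares estimate of $w$; the sketch above covers only the one-dimensional reduction.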
