On Efficient Algorithms for Computing Near-Best Polynomial Approximations to High-Dimensional, Hilbert-Valued Functions from Limited Samples

by Ben Adcock, Simone Brugiapaglia, Nick Dexter and Sebastian Moraga
Contributors
Author: Ben Adcock
Author: Simone Brugiapaglia
Author: Nick Dexter
Author: Sebastian Moraga
ISBN 978-3-98547-070-9 (ISBN-10 3-98547-070-7, EAN 9783985470709)

Sparse polynomial approximation is an important tool for approximating high-dimensional functions from limited samples – a task commonly arising in computational science and engineering. Yet, it lacks a complete theory. There is a well-developed theory of best s-term polynomial approximation, which asserts exponential or algebraic rates of convergence for holomorphic functions. There are also increasingly mature methods such as (weighted) ℓ^1-minimization for practically computing such approximations. However, whether these methods achieve the rates of the best s-term approximation is not fully understood. Moreover, these methods are not algorithms per se, since they involve exact minimizers of nonlinear optimization problems. This paper closes these gaps by affirmatively answering the following question: are there robust, efficient algorithms for computing sparse polynomial approximations to finite- or infinite-dimensional, holomorphic and Hilbert-valued functions from limited samples that achieve the same rates as the best s-term approximation? We do so by introducing algorithms with exponential or algebraic convergence rates that are also robust to sampling, algorithmic and physical discretization errors. Our results involve several developments of existing techniques, including a new restarted primal-dual iteration for solving weighted ℓ^1-minimization problems in Hilbert spaces. Our theory is supplemented by numerical experiments demonstrating the efficacy of these algorithms.
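For orientation, the weighted ℓ^1-minimization problems mentioned above are typically posed in a form such as the following. This is only a generic sketch of the standard quadratically-constrained formulation, written with illustrative symbols (A, b, w, η, the Hilbert space V and the truncation size N are not taken from the text); the precise variant, weights and error model treated in the book may differ.

% Sketch of a weighted l^1-minimization problem for Hilbert-valued coefficients.
% Symbols are illustrative: A is a sampling matrix, b the noisy samples,
% w_j > 0 weights, eta an error tolerance, V a Hilbert space, N a truncation size.
\begin{equation*}
  \min_{z = (z_j)_{j=1}^{N} \in \mathcal{V}^{N}}
  \;\; \|z\|_{1,w} := \sum_{j=1}^{N} w_j \, \|z_j\|_{\mathcal{V}}
  \quad \text{subject to} \quad \|A z - b\|_{2} \le \eta .
\end{equation*}

Here the z_j are the (Hilbert-valued) polynomial coefficients and η bounds the sampling and discretization errors; the restarted primal-dual iteration referred to in the abstract is a solver for weighted ℓ^1-minimization problems of this kind in Hilbert spaces.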