OSNAP: Faster Numerical Linear Algebra Algorithms via Sparser Subspace Embeddings

Jelani Nelson, Huy L. Nguyen

Published 2012 in IEEE Annual Symposium on Foundations of Computer Science

ABSTRACT

An oblivious subspace embedding (OSE) with parameters ε, d is a distribution D over matrices Π ∈ R^{m×n} such that for any linear subspace W ⊆ R^n with dim(W) = d, P_{Π~D}(∀x ∈ W, ‖Πx‖_2 ∈ (1 ± ε)‖x‖_2) > 2/3. We show that a certain class of distributions, Oblivious Sparse Norm-Approximating Projections (OSNAPs), provides OSEs with m = O(d^{1+γ}/ε^2), where every matrix Π in the support of the OSE has only s = O_γ(1/ε) non-zero entries per column, for any desired constant γ > 0. Plugging OSNAPs into known algorithms for approximate least squares regression, ℓ_p regression, low-rank approximation, and approximating leverage scores yields faster algorithms for all these problems. Our main result is essentially a Bai-Yin-type theorem in random matrix theory and is likely to be of independent interest: we show that for any fixed U ∈ R^{n×d} with orthonormal columns and random sparse Π, all singular values of ΠU lie in [1 − ε, 1 + ε] with good probability. This can be seen as a generalization of the sparse Johnson-Lindenstrauss lemma, which concerned the case d = 1. Our methods also recover a slightly sharper version of a main result of [Clarkson-Woodruff, STOC 2013], with a much simpler proof: we show that OSNAPs give an OSE with m = O(d^2/ε^2), s = 1.
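To make the abstract's main claim concrete, here is a minimal numerical sketch of one simple OSNAP-style construction: each column of Π has s entries of value ±1/√s placed in s random rows (s = 1 gives a CountSketch-like matrix, the m = O(d²/ε²), s = 1 regime mentioned above). The specific parameter choices (n, d, m, and the seed) are illustrative assumptions, not values from the paper; the check that all singular values of ΠU lie near 1 is a probabilistic event, not a guarantee of any single draw.

```python
import numpy as np

rng = np.random.default_rng(0)

def osnap(n, m, s, rng):
    """Draw a sparse embedding Pi in R^{m x n} with s nonzeros per column.

    Each column gets s entries equal to +/- 1/sqrt(s), placed in s
    distinct rows chosen uniformly at random (one simple OSNAP-style
    variant; s = 1 recovers a CountSketch-like matrix).
    """
    Pi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        signs = rng.choice([-1.0, 1.0], size=s)
        Pi[rows, j] = signs / np.sqrt(s)
    return Pi

# Embed a random d-dimensional subspace of R^n and check that the
# singular values of Pi @ U are close to 1 (the paper's main result).
n, d, s = 2000, 5, 1
m = 20 * d * d  # m = O(d^2 / eps^2) regime for s = 1; constants are illustrative
U, _ = np.linalg.qr(rng.standard_normal((n, d)))  # orthonormal columns
Pi = osnap(n, m, s, rng)
sv = np.linalg.svd(Pi @ U, compute_uv=False)
print(sv.min(), sv.max())  # with good probability, all singular values near 1
```

Note that Π is stored densely here only for readability; the whole point of the construction is that Πx can be applied in time proportional to s times the number of nonzeros of x, so a real implementation would store only the s row indices and signs per column.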

