PaperHub
Overall rating: 6.8/10 (Spotlight; 4 reviewers; min 6, max 7, std 0.4)
Individual ratings: 6, 7, 7, 7
Confidence: 3.3 · Correctness: 3.8 · Contribution: 3.0 · Presentation: 3.8
NeurIPS 2024

Approximating the Top Eigenvector in Random Order Streams

OpenReview · PDF
Submitted: 2024-05-13 · Updated: 2024-11-06

Abstract

When rows of an $n \times d$ matrix $A$ are given in a stream, we study algorithms for approximating the top eigenvector of $A^T A$ (equivalently, the top right singular vector of $A$). We consider worst case inputs $A$ but assume that the rows are presented to the streaming algorithm in a uniformly random order. We show that when the gap parameter $R = \sigma_1(A)^2/\sigma_2(A)^2 = \Omega(1)$, then there is a randomized algorithm that uses $O(h \cdot d \cdot \text{polylog}(d))$ bits of space and outputs a unit vector $v$ that has a correlation $1 - O(1/\sqrt{R})$ with the top eigenvector $v_1$. Here $h$ denotes the number of ``heavy rows'' in the matrix, defined as the rows with Euclidean norm at least $\|{A}\|_F/\sqrt{d \cdot \text{polylog}(d)}$. We also provide a lower bound showing that any algorithm using $O(hd/R)$ bits of space can obtain at most $1 - \Omega(1/R^2)$ correlation with the top eigenvector. Thus, parameterizing the space complexity in terms of the number of heavy rows is necessary for high accuracy solutions. Our results improve upon the $R = \Omega(\log n \cdot \log d)$ requirement in a recent work of Price. We note that Price's algorithm works for arbitrary order streams whereas our algorithm requires a stronger assumption that the rows are presented in a uniformly random order. We additionally show that the gap requirements in Price's analysis can be brought down to $R = \Omega(\log^2 d)$ for arbitrary order streams and $R = \Omega(\log d)$ for random order streams. The requirement of $R = \Omega(\log d)$ for random order streams is nearly tight for Price's analysis as we obtain a simple instance with $R = \Omega(\log d/\log\log d)$ for which Price's algorithm, with any fixed learning rate, cannot output a vector approximating the top eigenvector $v_1$.
Keywords
Streaming Algorithms, Top Singular Vector, PCA, Random Order Streams

Reviews and Discussion

Review
Rating: 6

The paper studies the problem of finding the top eigenvector of a matrix in the streaming model. The problem is: given a sequence of d-dimensional rows a_1, a_2, ..., a_n of a matrix A, approximately compute the top eigenvector v_1 of A^TA. The algorithm gets a single pass over the stream of input rows, must use subquadratic (in d) space, and should output an approximation \hat v_1 to v_1.

Existing works on this problem have the following natural idea at their core: initialize a vector z\in R^d (typically to a random unit vector), and then, upon receiving a row a_i, set z\gets z + \eta (a_i^T z) a_i for some learning rate \eta. One can either keep a fixed learning rate or use a combination of rates: the latter is done in Price's algorithm from a few years ago, which computes an approximation \hat v_1 that is at least 1-O(\log d/R) correlated with v_1, where R is the eigenvalue gap \lambda_1(A^TA)/\lambda_2(A^TA). Price also showed that a 1-O(1/R^2) correlation requires space that is almost quadratic in d.
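For concreteness, here is a minimal NumPy sketch of the fixed-learning-rate variant of this update (illustrative names only; this is not Price's actual algorithm):

```python
import numpy as np

def oja_fixed_rate(rows, d, eta, seed=0):
    """Sketch of an Oja-style iteration with a fixed learning rate eta.
    `rows` is an iterable of d-dimensional NumPy vectors, seen in one pass."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(d)
    z /= np.linalg.norm(z)              # random unit-vector initialization
    for a in rows:
        z = z + eta * (a @ z) * a       # z <- z + eta * (a_i^T z) a_i
        z /= np.linalg.norm(z)          # keep the iterate on the unit sphere
    return z
```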

The authors' result is that in random order streams, the correlation bound can be improved to 1-O(1/\sqrt{R}). They complement their algorithmic result with a lower bound for Price's algorithm -- this is basically an adaptation of Price's lower bound to random order streams. They also provide certain impossibility results for Oja's algorithm with a fixed learning rate.

The main algorithmic technique is clean. The key observation is that, because the stream is randomly ordered, its substreams contain good spectral approximations to the original matrix -- this follows from standard spectral concentration bounds (after removing heavy rows). Due to the spectral gap assumption, the top eigenvectors of these rescaled submatrices are close to v_1. Thus, one can run the power method on the substreams, of which there are quite a few, and obtain good convergence.
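A rough sketch of this idea (illustrative code only; it omits the heavy-row handling and row sampling the actual algorithm needs): split the randomly ordered rows into contiguous blocks and perform one power-method step per block.

```python
import numpy as np

def block_power_sketch(rows, d, num_blocks, seed=0):
    """Illustrative sketch: one power-method step per contiguous block of a
    randomly ordered list of rows, using O(d) working space per block."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(d)
    z /= np.linalg.norm(z)
    block_size = max(1, len(rows) // num_blocks)
    for b in range(num_blocks):
        w = np.zeros(d)
        for a in rows[b * block_size:(b + 1) * block_size]:
            w += (a @ z) * a            # accumulates (A_block^T A_block) z
        if np.linalg.norm(w) > 0:       # skip degenerate blocks
            z = w / np.linalg.norm(w)
    return z
```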

Strengths

S1. The authors provide a clean analysis of a very natural algorithm for top eigenvector computation in data streams. This is definitely worth publishing.

Weaknesses

W1. I am not sure how novel the idea of using substreams that concentrate around A as steps in the power iteration is. If it is, then this is a very nice paper. Otherwise it can be seen more as a sharpening of Price's result.

W2. The authors' overview of related work on random order streams is not up to date. From their list of citations it seems that there was no work done on random order streams between 2009 and 2023. See, for example, https://arxiv.org/pdf/2110.10091 and references therein.

W3. I find it strange that the authors use an unpublished result (Magdon-Ismail, 2010) from 14 years ago for the concentration bound that their analysis relies on. I recommend looking at Joel Tropp's survey on matrix concentration for a more formal reference.

Questions

None.

Limitations

None.

Author Response

We thank the reviewer for suggestions on citations for the random order model. We will add more references in the final version of the paper.

Using substreams for quadratic approximation

When each row of the stream is independently sampled from a distribution, Hardt and Price analyze a similar block power method. In that case, it is much easier to argue that the substreams concentrate around the covariance matrix. But in our case, we assume that the rows of the matrix are fixed and the only randomness is in the order in which the rows are presented to the streaming algorithm. This makes the problem quite different from the one considered by Hardt and Price, and hence the techniques used in our proofs are different as well.

Using Magdon-Ismail’s result

We used the manuscript of Magdon-Ismail as the main reference since it states the result directly in the form we require. But we agree that citing Tropp's work would be useful for readers. We will include a citation in the final version of the paper.

Review
Rating: 7

The paper studies the problem of approximating the top eigenvector of $A^T A$ when rows of an $n \times d$ matrix $A$ are given in a stream. The authors consider worst-case inputs $A$ but assume the rows are presented in a uniformly random order. They show that when the gap parameter $R = \sigma_1(A)^2 / \sigma_2(A)^2 = \Omega(1)$, there is a randomized algorithm using $O(h \cdot d \cdot \text{polylog}(d))$ bits of space that outputs a unit vector $v$ with correlation $1 - O(1/\sqrt{R})$ with the top eigenvector $v_1$. Here, $h$ denotes the number of "heavy rows" in the matrix, defined as rows with Euclidean norm at least $\|A\|_F / \sqrt{d \cdot \text{polylog}(d)}$. The authors provide a lower bound showing that any algorithm using $O(hd/R)$ bits of space can obtain at most $1 - \Omega(1/R^2)$ correlation with the top eigenvector. Their results improve upon the $R = \Omega(\log n \cdot \log d)$ requirement in a recent work by Price. The authors' algorithm is based on a variant of the block power method. They also show improvements to the gap requirements in Price's analysis for both arbitrary order and random order streams. The techniques involve row sampling, concentration inequalities, and careful analysis of the power method with approximate quadratic forms.

Strengths

The paper introduces a novel algorithm to approximate the top eigenvector in random order streams, a significant advancement over previous methods that required multiple passes over the data or had more stringent conditions. One of the key strengths is its method for handling heavy rows in the matrix. By storing the rows with large Euclidean norms separately and processing the remaining rows, the algorithm ensures an accurate approximation of the top eigenvector. The algorithm achieves near-optimal space complexity, using $O(h \cdot d)$ space where $h$ is the number of heavy rows, which is shown to be nearly tight. Moreover, the paper improves upon the gap requirements needed for the algorithm to work: it reduces the gap $R$ from $\Omega(\log n \cdot \log d)$ to $\Omega(\log^2 d)$ for arbitrary order streams and from $\Omega(\log d)$ to constant for random order streams. The use of row norm sampling as a general technique to reduce the number of rows while preserving the top eigenvector is a versatile approach.

Weaknesses

The main weakness of this work is the lack of empirical validation. Given the simplicity of their algorithm and the ease of generating synthetic matrices with a given sequence of singular values, it should be easy to perform experiments to compare it with Price's approach and provide insight into its practical effectiveness.

Questions

How does your algorithm compare with Price's algorithm in practice?

Typos

  • line 123, centered equation, $CR^2$ should be $(CR^2)$.
  • line 124, $a/\|a\|_2$ should be $v/\|v\|_2$.
  • line 144, $td/R$ should be $hd/R$.
  • line 4 of Algorithm 1, $n\epsilon^2$ should be $(n\epsilon^2)$.
  • line 10 of Algorithm 1, $A_{j\cdot(2np):j\cdot(2np)+y_j}$ should denote the rows in the range $j\cdot(2np):j\cdot(2np)+y_j$, right? $j$ should be $j-1$, then.

Limitations

Yes.

Author Response

We thank the reviewer for their comments.

Separation in Practice

In Section 5 of the paper, we give a synthetically constructed instance for which Price's algorithm does not give a good approximation to the top singular vector. For instances observed in practice, it seems harder to show such a separation. In many practical instances, the observations, i.e., the rows, are sampled from a nice enough distribution for which earlier analyses of Oja's algorithm show very good approximability of the top eigenvector.

Comment

I thank the authors for their rebuttal and have no further questions. As an aside (not affecting my score) I think the paper would benefit from including, even if only briefly and informally, a discussion of the empirical side, e.g. what you mention regarding the many practical scenarios in which the distribution is nice enough and Oja's algorithm yields a good approximation. This is related to reviewer yHif's concern that your paper could better motivate the problem.

Comment

Thanks for the response. We will include our observations from experiments in the final version of the paper.

Review
Rating: 7

This paper is concerned with estimating a direction that is well-correlated with the top eigenvector $v_1$ of a matrix whose rows $a_1, \dots, a_n$ are streamed to us in uniformly random order. To make this well-defined, a gap assumption is required, i.e., that $R \coloneqq \lambda_1(A^{\top}A)/\lambda_2(A^{\top}A) \gg 1$. A successful algorithm is one that returns an estimate vector that is $1 - o_{R}(1)$ correlated with $v_1$ while using at most $\widetilde{O}(d)$ space. Note that if we are allowed to use $d^2$ space, then the problem is trivial, as we can simply maintain $A^{\top}A$. Thus, one of the main challenges is to figure out how to estimate $v_1$ when we are not allowed to remember too many of the rows we have seen so far.

The main result is an algorithm that uses $\widetilde{O}(hd)$ space and outputs a unit vector $u$ such that $\langle u, v_1 \rangle \ge 1-C/\sqrt{R}$ for some universal constant $C$, and where $h$ denotes the number of "heavy" rows (those that contribute at least a $\sim 1/d$ fraction to the Frobenius norm of $A$). This dependence on $h$ is essentially optimal, as the authors give a lower bound saying that any algorithm that outputs a vector with at least $1-1/R^2$ correlation must use $hd/R$ space.

The main technical contribution is to observe that if the rows of $A$ are randomly permuted and streamed to us, then in some sense, we can see $\sim \log d$ spectral approximations to $A$ in disjoint contiguous windows of the stream. Thus, the power method can be simulated by adding $a_i\langle a_i, u \rangle$ to the estimate in each round. To actually form the spectral approximations and implement this with low space, the authors use a row sampling result which says that one only needs to choose and reweight $\sim \rho/\varepsilon^2$ rows of $A$ to form a decent spectral approximation to $A$, where $\rho$ denotes the stable rank of $A$. So, the authors use the row sampling result to form $\log d$ many spectral approximations to $A$ and then simulate the power method on these approximations. The final result follows from showing that the top eigenvector of the spectral approximations is reasonably well correlated with the top eigenvector of $A$, which itself follows from standard eigenvector perturbation tools and the gap assumption.
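As an aside, the flavor of the row sampling step can be illustrated with generic length-squared sampling (a standard construction, not necessarily the exact sampling result used in the paper; the function name is illustrative): sampled and reweighted rows give a matrix $S$ with $\mathbb{E}[S^{\top}S] = A^{\top}A$.

```python
import numpy as np

def length_squared_sample(A, m, seed=0):
    """Generic length-squared row sampling (illustrative, not necessarily the
    paper's exact scheme): draw m rows i.i.d. with probability proportional to
    their squared norm and rescale so that E[S.T @ S] = A.T @ A."""
    rng = np.random.default_rng(seed)
    norms_sq = np.einsum('ij,ij->i', A, A)      # squared row norms
    probs = norms_sq / norms_sq.sum()
    idx = rng.choice(A.shape[0], size=m, p=probs)
    return A[idx] / np.sqrt(m * probs[idx])[:, None]   # m x d sketch of A
```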

Although this does not immediately work for heavy rows (e.g., consider a more difficult scenario where there is one row orthogonal to all the other rows in the matrix, it has large norm, and it witnesses $v_1$; then we must remember it), the simple fix is to just identify heavy rows on the fly and store those exactly. As mentioned earlier, this linear dependence on the number of heavy rows is necessary.
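For illustration, the heaviness threshold from the abstract can be written as follows (a non-streaming sketch that assumes $\|A\|_F^2$ is known or guessed; the actual algorithm identifies heavy rows on the fly):

```python
import numpy as np

def split_by_heaviness(rows, d, fro_norm_sq, polylog_d):
    """Illustration of the heaviness threshold only: a row is heavy if its
    squared norm is at least ||A||_F^2 / (d * polylog(d))."""
    threshold = fro_norm_sq / (d * polylog_d)
    heavy = [a for a in rows if float(a @ a) >= threshold]   # stored exactly
    light = [a for a in rows if float(a @ a) < threshold]    # fed to sampling
    return heavy, light
```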

Strengths

The problem is a very natural one and the random order stream of a worst-case matrix is a nice assumption that allows one to get more optimistic results while not trivializing the question.

The algorithm is really simple and easy to interpret. The solution/analysis are very elegant. The writing is also clear. Finally, the idea and execution are pretty insightful.

Weaknesses

In order to implement the row sampling result, we need an estimate of the operator norm of $A$. The authors fix this by assuming that the coordinates of the inputs are between $1/\text{poly}(nd)$ and $\text{poly}(nd)$ and then maintaining $\log(nd)$ copies of the algorithm in parallel. This is not too big of a deal, but I wish this assumption were made more explicit towards the beginning of the paper.

Questions

None

Limitations

Yes

Review
Rating: 7

Given a stream of $n$ data points $a_1, a_2, \dots, a_n \in \mathbb{R}^d$, this paper studies the problem of approximating the top eigenvector of the matrix $A^T A$ at the end of the stream, where $A \in \mathbb{R}^{n \times d}$ is the data matrix whose rows correspond to the data points. This is the same problem as approximating the first principal component in PCA. Prior work on this problem by Price (2023) gave an algorithm with worst-case guarantees, i.e., both the data points and the order of the data points can be chosen by the adversary. This paper is the first to study the setting where, although the data points can be chosen arbitrarily by the adversary, the order of arrival of the data points is uniformly random. For this setting, the paper provides an algorithm with improved guarantees over the algorithm by Price. Additionally, they also provide a lower bound for streaming algorithms in the random-order arrival model.

Strengths

Originality: First paper to study the problem of approximating the top eigenvector in the random-order arrival setting.

Quality and Clarity: Overall well-written paper.

Significance: The paper can certainly do a better job of motivating the problem. This paper is primarily a theory paper but since the problem studied is closely related to PCA, which is obviously very important in practical applications, it is easy to imagine scenarios where the algorithm described in this paper can be implemented. For example, in the case of PCA for population genetics, where the dimensionality of the data (number of genetic markers) can exceed 10 million, and the number of samples in standard datasets like UK Biobank is of the order of hundreds of thousands, you could imagine running this algorithm in one-pass to approximate the top principal component, running it on a second pass to approximate the second principal component, and then using a third pass to map all the points using these two components (most PCA studies in population genetics papers just use 2 components as they are easy to visualize in papers). Even otherwise, as multiple papers on the closely-related "Streaming PCA" problem have appeared in past iterations of NeurIPS and other related conferences, this paper is clearly relevant to NeurIPS.

Weaknesses

Paper needs to do a better job motivating the problem. This is a submission to NeurIPS, not COLT. Clearly, the paper has interesting technical contributions but I think this might be a major weakness for the paper. Some suggestions for the same:

It would be nice for the introduction of the paper to have at least a few lines motivating the case for high-dimensional $d$ and why the distinction between $d^2$ and $d$ space is something which matters in real-world applications. For example, see the paper "Memory Limited, Streaming PCA" by Mitliagkas, Caramanis, and Jain, which appeared in NeurIPS 2013: "In certain high-dimensional applications, where data points are high resolution photographs, biometrics, video, etc., $p$ often is of the order of $10^{10}$ - $10^{12}$, making the need for $O(p^2)$ memory prohibitive. At many computing scales, manipulating vectors of length $O(p)$ is possible, when storage of $O(p^2)$ is not."

A great reference to add on line 80 would be the book chapter by Gupta and Singla titled "Random-Order Models", which appeared in the book "Beyond Worst-Case Analysis" edited by Tim Roughgarden.

Questions

How do the guarantees in this paper extend to the setting where you need to approximate not just the top eigenvector, but the top $k$ principal components, for some arbitrary value of $k$? I understand that this might not be straightforward to answer, but at the very least, it would be nice to have a conclusion or future directions section at the end of the paper, with a brief discussion of this.

Minor editing comment: For the paper "Syamantak Kumar and Purnamrita Sarkar. Streaming PCA for Markovian data.", please use the BibTeX citation from the official NeurIPS website instead of Google Scholar. The current citation reads as NeurIPS 2024, but it actually appeared in NeurIPS 2023.

Limitations

Primarily a theory paper.

Author Response

Thanks for the reference suggestions. We will add them to the introduction in the final version.

How do the guarantees in this paper extend to the setting where you need to approximate not just the top eigenvector, but the top $k$ principal components, for some arbitrary value of $k$?

In practice, one can make $k$ passes over the data and, by projecting away the vectors computed in the previous rounds, obtain approximations to the top $k$ singular vectors. But we do not have any guarantees on how good such approximations are. The main reason is that the approximation guarantee, i.e., $\langle\hat{v}, v_1\rangle^2 \ge 1 - 1/\sqrt{R}$, that we obtain in the first pass is not sufficient to ensure that the gap between the top two singular values of the "residual stream", i.e., of the vectors $(I-\hat{v}\hat{v}^T)a_1, \ldots, (I-\hat{v}\hat{v}^T)a_n$, is large enough for the algorithm to work.
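A minimal sketch of this multi-pass deflation heuristic (illustrative names; as noted above, it comes with no approximation guarantees): project each incoming row onto the orthogonal complement of the directions already computed, then re-run the single-vector streaming algorithm on the projected stream.

```python
import numpy as np

def deflate_stream(rows, V):
    """Sketch of multi-pass deflation: V is a d x k matrix whose orthonormal
    columns are the directions computed in earlier passes; each row is
    projected onto their orthogonal complement before being re-streamed."""
    for a in rows:
        yield a - V @ (V.T @ a)         # (I - V V^T) a, using only O(kd) memory
```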

It is an interesting open question to extend our algorithm, or to obtain an algorithm via a different approach, for $k$-SVD when a worst-case input matrix $A$ is presented to the algorithm in a uniformly random order. We note that the work of Price also applies only to the top eigenvector, and the same open question applies there as well.

Final Decision

This paper studies the problem of approximating the top eigenvector of the matrix in the random order data stream model. Both upper and lower bounds are provided.

All reviewers agreed that the problem studied is natural, and that the random order stream is a reasonable model for obtaining more optimistic results. It is also very nice that the space complexity of the proposed algorithm is nearly optimal.