Sample-Efficient Private Learning of Mixtures of Gaussians
We provide improved sample complexity bounds for learning mixtures of Gaussians with differential privacy, which are optimal for sufficiently large dimension or for one-dimensional Gaussian mixtures.
Abstract
Reviews and Discussion
The paper studies the important problem of privately learning Gaussian mixture models, i.e., estimating the underlying distribution to within a desired total variation distance. By combining different techniques, the authors succeed in deriving bounds with a quadratic dependence on the dimension, thus significantly improving the existing results.
Strengths
The paper contains several new results that significantly improve the state of the art. These include i) Theorem 1.3, which establishes a bound with quadratic dependence on the dimension for any dimension, ii) Theorem 1.4, which proposes an improved bound for the univariate case, showing that the sample complexity can be linear in the number of components, and iii) Theorem 1.5, which provides a lower bound on the sample complexity. The latter, combined with Theorem 1.4, shows the optimality of the established bounds for univariate Gaussian mixtures. In addition, the paper is generally well and smoothly written.
Weaknesses
- The paper could benefit from some numerical verification of the results/algorithms used.
- The appendix is poorly organized. Without an outline of the proofs, it is difficult to follow such a long appendix.
Questions
- It would be better to explain the difference between density estimation and parameter estimation in more detail.
- How do the bounds of Theorems 1.3, 1.4, and 1.5 behave with respect to the failure probability, order-wise?
- Can the authors discuss the assumption (line 160) a bit more? Why does it depend on (and not or )? Also, is the polynomial dependence restrictive?
- The first sentence of the informal statement of Theorem 2.1 does not seem rigorous enough. Is such a unique? I don't think so. If not, which choice of is taken into account when computing ? Probably maximum volume?
- The appendix sections are very difficult to follow. There should be a detailed outline at the beginning to guide the reader as to what each appendix covers. There should also be sufficient references in the main text. For example, after the informal statement of Theorem 2.1, it should be clearly stated where the full statement and proof can be found. Similarly, after each paragraph (e.g., Section 2.1) and after each previously established result that is used (e.g., the advanced composition theorem), there should be an exact pointer to where the complete version can be found in the appendices.
- Since the submission was allowed 9 pages, the authors had 1.5 more pages available. I think some material from the appendices, for example the algorithms used to privately learn GMMs, could be moved to the main text.
- Finally, what is missing the most is an experimental section. If I'm not mistaken, this should be easy to add. Although not all theoretical work requires experimental verification, I believe that experiments verifying the theoretical findings would greatly increase the impact of the results.
Minor comments:
- Line 52: it is better to add "The parameters and ...".
- The constant in Theorem 1.5 does not appear in the bound.
- In the first paragraph of Section 2.1, it is mentioned that "... as we can finish the procedure with hypothesis selection, as discussed above". However, hypothesis selection has not been sufficiently discussed.
- Is it true that the intuition given in Section 2.1 holds if ?
- Typo in line 178 ("a a").
- Is k in lines 203-204 the number of components in the Gaussian mixture? Why does this statement ("if we altered data points...") only hold for k changes?
Limitations
NA
We would like to thank the reviewer for their detailed feedback. In the following we answer the raised questions.
- To clarify this further, the goal of density estimation is to learn the overall distribution's PDF (up to low total variation distance), whereas in parameter estimation we want to learn every Gaussian component's mean and covariance. One can construct two different Gaussian mixtures with very similar PDFs but with somewhat different components (in which case identifying the wrong Gaussian mixture is acceptable for density estimation but not for parameter estimation). We will add this discussion to the paper.
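As a toy univariate illustration of this point (generic symbols: w is a small mixing weight and mu an arbitrary mean, not quantities from the paper), consider
\[
P \;=\; (1-w)\,\mathcal{N}(0,1) + w\,\mathcal{N}(\mu,1),
\qquad
Q \;=\; (1-w)\,\mathcal{N}(0,1) + w\,\mathcal{N}(0,1).
\]
Then $d_{\mathrm{TV}}(P,Q) \le w$ no matter how large $\mu$ is, so a density estimator with target accuracy coarser than $w$ may output $Q$ in place of $P$, even though $Q$ completely misses the component at $\mu$; a parameter estimator would have to locate that component.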
- For the sake of clarity of presentation, we tried to avoid writing down the exact dependency on beta in our bounds. However, note that the sample complexity scales at most logarithmically with 1/beta. The reason is that any method that achieves a failure probability below 1/2 can be boosted to have failure probability beta with a mild (logarithmic in 1/beta) increase in the sample complexity. This can be done by running the estimator on log(1/beta) data sets and then simply running private hypothesis selection on the outcomes. We will add this discussion to the paper.
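To sketch the boosting step in symbols (illustrative constants, generic notation): split the data into $t = \lceil \log_2(2/\beta) \rceil$ disjoint batches and run the base estimator, which fails with probability at most $1/2$, on each batch to obtain candidates $\hat P_1,\dots,\hat P_t$. Then
\[
\Pr\bigl[\text{no } \hat P_i \text{ is accurate}\bigr] \;\le\; 2^{-t} \;\le\; \beta/2,
\]
and private hypothesis selection over $\{\hat P_1,\dots,\hat P_t\}$ returns a candidate nearly as accurate as the best one, with its own failure probability also driven below $\beta/2$; a union bound gives overall failure probability at most $\beta$. Since each batch has the base sample size, the total number of samples grows only by an $O(\log(1/\beta))$ factor.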
- The choice of is just for convenience, and the effect of the constant 100 on the sample complexity is negligible (i.e., the sample complexity does not change in terms of the notation). In Line 157, we mention that the sample complexity (after the crude approximation is obtained) depends on , so even if this only blows up the sample complexity by .
- Indeed, the is not unique, and the volume in Theorem 2.1 refers to the volume (i.e., Lebesgue measure) of the set of all possible , for a fixed choice of dataset . As an example, if (in which case is just a variance) and every between and satisfies the score function, then the volume is 3. In general, we use the higher-dimensional Lebesgue measure (see Appendix C.1) to formally define the volume.
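In symbols, one way to write the quantity being measured (generic notation: $\Theta$ is the parameter space, $\mathrm{score}_D$ the score function on the fixed dataset $D$, $\tau$ the acceptance threshold, and $\lambda$ the Lebesgue measure) is
\[
\mathrm{Vol}(D) \;=\; \lambda\bigl(\{\theta \in \Theta \,:\, \mathrm{score}_D(\theta) \le \tau\}\bigr).
\]
In the one-dimensional example above, this set is an interval of candidate variances of length 3.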
5 and 6. We thank the reviewer for their suggestions about improving the presentation of the paper and the appendices. We will use the remaining space to give more details in the main paper (and will also improve the structure of the appendix to make it easier to navigate).
- We would like to emphasize that our work focuses on the fundamental problem of determining the number of samples required to learn a GMM privately. However, our algorithm is not computationally efficient. Note that even without privacy, it is not known whether GMMs can be learned efficiently in high dimensions. This remains an intriguing open problem in the field of computational statistics.
Minor comments:
Regarding Theorem 1.5, we note that will be some universal constant, so it can be hidden in the notation. The intuition in Section 2.1 holds as long as , so it also holds if . In lines 203-204, the use of k is a mistake; we will use another variable to avoid confusing it with the number of components. For all other comments, we agree with you and will incorporate your feedback.
I thank the authors for the clarifications. I believe the paper merits publication, and I maintain my initial rating.
This paper investigates the sample complexity of privately learning mixtures of Gaussians. The authors achieve a sample complexity of approximately , where is an upper bound on the condition number of the covariance matrix and the norm of the mean. This result improves upon the previous best bound of . Additionally, they construct a lower bound of , which refutes a conjecture from prior work, and they achieve optimal sample complexity when .
Strengths
The improvement in sample complexity is significant. The paper provides a thorough technical overview of the upper bound, detailing the incorporation of sample compression and the methods used to enhance dimension independence within the robustness-to-privacy conversion technique.
Weaknesses
The paper is technically dense, making it challenging to understand and verify all the details. The authors do not fully utilize the page limit, which could have been used to explain the techniques more clearly. The technique for establishing the lower bound is not discussed in the main body of the paper. Including formal definitions and crucial lemmas in the main text could help readers better understand the key ideas. For instance, the definition of volume was unclear until I consulted the appendix.
Questions
- Where does the term come from in the upper bound?
- Are there alternative methods to address this problem that do not rely on the robustness-to-privacy conversion technique?
Limitations
Yes
We would like to thank the reviewer for their detailed feedback. We would like to emphasize that our bound does not depend on the condition number of the covariance matrix or on the magnitude of the mean, i.e., our sample complexity has no dependence on . (Otherwise, proving an upper bound would have been easy, e.g., by using private hypothesis selection on a finite cover for the set of possible mixtures.) We agree with the comments about the presentation of the paper, and will improve it in the next version (including adding more details about the definitions, lemmas, or proof sketches in the main paper). In the following we answer the raised questions.
- To explain the term, the point is that for the private algorithm to work, we need two things. First, the private algorithm finds a covariance matrix (or more precisely, a mean-covariance pair ) with low score (see Line 932 and the above few lines for the definition). Second, if a mean-covariance pair has a low score, then it is actually similar to a true mixture component . The term is needed for the first part. However, the term is needed for the second part: that any mean-covariance pair of low score is similar to some true mixture component (this is the goal of Proposition E.4 in our paper).
The reason for the dependence in this second term is nontrivial so here’s a high-level intuition. Given a dataset and mean-covariance pair , the score is small if there exists roughly fraction of data points that “look like” they came from . The point is that one can have one point come from each of different mixture components which, together, look like samples from a totally different Gaussian from any of the mixture components. As a result, we need to make sure we have more than total points, because then an fraction of the data points is more than total samples. We actually need an additional factor of , because it turns out that we can even have points from each of the mixture components which look like samples from a totally different Gaussian.
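A rough way to write this counting requirement (generic notation: $n$ is the total number of samples and $m$ is a bound on how many points from each component can participate in such a spurious set) is
\[
\alpha\, n \;\gtrsim\; m\, k
\quad\Longleftrightarrow\quad
n \;\gtrsim\; \frac{m\,k}{\alpha},
\]
so with one point per component ($m = 1$) we already need $n \gtrsim k/\alpha$, and a larger $m$ strengthens the requirement accordingly.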
- The only other known approach for privately learning unbounded and high-dimensional GMMs (density estimation) is [AAL24] that uses sample-and-aggregate. For the univariate setting (density estimation for GMMs with unbounded parameters), there is another approach that uses stability-based histograms [AAL21].
Thanks for the response. I maintain my score.
This work focuses on the task of density estimation of a mixture of Gaussians under the restriction of differential privacy. Unlike parameter estimation, density estimation does not require accurately estimating the mixture's parameters, but instead bounds the total variation distance between the estimated and true distributions. This task can be achieved even without any boundedness or separation assumptions on the parameters of the components.
This task is known to require a sample size of roughly kd^2/alpha^2 even in the non-private setting, where k is the number of components, d is the dimension, alpha is the bound on the TV distance, and logarithmic factors are ignored. In the private setting, previous work [1] achieved the accuracy guarantee under approximate DP with roughly k^2 d^4 samples (the exact bound includes several other terms that depend on the accuracy and privacy parameters as well).
This work provides an improved bound, reducing the dependence on these parameters to roughly kd^2 in the high-dimensional regime and to a linear dependence on k in the univariate setting, together with lower bounds which asymptotically match the upper bound in the univariate case and nearly match it in the multivariate one.
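For orientation, and ignoring the accuracy and privacy parameters as well as logarithmic factors, the bounds discussed in this submission compare roughly as follows (the expressions are the ones quoted in this review thread):
\[
\underbrace{k d^{2}}_{\text{non-private}}
\;\lesssim\;
\underbrace{k d^{2} + k^{1.5} d^{1.75} + k^{2} d}_{\text{this work}}
\;\lesssim\;
\underbrace{k^{2} d^{4}}_{\text{previous work [1]}},
\]
and for $d \gtrsim k^{2}$ the middle expression is dominated by its first term, matching the non-private dependence on the dimension.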
The proposed algorithm relies on first obtaining a crude estimation of the parameters and then using hypothesis selection to improve the estimation. The crude estimation is achieved using robust GMM estimation techniques and a robustness-to-privacy conversion, based on an inverse-sensitivity-like instantiation of the exponential mechanism. This method is computationally inefficient.
[1] Mohammad Afzali, Hassan Ashtiani, and Christopher Liaw. Mixtures of gaussians are privately learnable with a polynomial number of samples. In Algorithmic Learning Theory, pages 1–27. PMLR, 2024.
Strengths
The results presented in this work provide a significant improvement over the previously known ones. Though the proof technique is highly involved, the authors presented the proof outline in a relatively intuitive way, and provided motivation for the various steps it includes.
Weaknesses
Despite the great work done by the authors and my best efforts, I was not able to follow all of the proof structure. In particular, I could not find the justification for some of the terms that appear in the bound presented in Theorem 1.3. It seems to me that a section that "puts everything together" would be of great benefit. I will try to describe my current understanding and the existing gaps.
To the best of my understanding:
- The and were explained in the "Reducing to “crude approximation”" section, and they represent the sample size required to accurately and privately learn the GMM given some crude estimation of its components, using the hypothesis selection method.
- The was explained in the "Improving Dimension Dependence" section, where it results from the fact that points are required to get the crude estimation of the parameters of a single component under -robustness, which can then be translated to a DP estimation with accuracy using robustness-to-privacy conversion techniques (Theorem 2.1); the additional term results from advanced composition over components.
- I failed to understand where the additional two terms ( and ) come from, and I suspect they were accumulated at some point during the robustness-to-privacy transformation.
Questions
As I mentioned before, I would find an additional section that brings all the components together to outline the final proof, focusing on the final achieved bound, very useful.
Limitations
N/A
We would like to thank the reviewer for their detailed feedback. We are happy to add a description that puts everything together and explains where each of the terms comes from, as suggested by the reviewer. In the following, we explain the terms in the sample complexity that the reviewer asked about.
To explain the term, we note that in the crude estimation part, the sample complexity (from applying Theorem C.3, the formal version of Theorem 2.1) also has an assumption (see the “Volume” bullet) that . Here, represent the privacy terms for a single iteration of the crude estimation (to learn one parameter), and represents the robustness threshold, which ends up being roughly . The reason the robustness threshold depends on like this is that each component on average has weight , so one can corrupt a fraction of the points and completely alter a component. Also, since we run the crude estimation times, we can use advanced composition to say that if each iteration is -DP, the overall algorithm is -DP. With all of these parameters set, we precisely get .
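For reference, the advanced composition theorem invoked here states (standard form, generic notation) that the adaptive composition of $k$ mechanisms, each $(\varepsilon_0,\delta_0)$-DP, is
\[
\Bigl(\varepsilon_0\sqrt{2k\ln(1/\delta')} + k\,\varepsilon_0\bigl(e^{\varepsilon_0}-1\bigr),\;\; k\,\delta_0 + \delta'\Bigr)\text{-DP}
\quad\text{for any } \delta' > 0,
\]
so, for small enough per-iteration parameters, running each of the $k$ crude-estimation steps with a privacy parameter on the order of $\varepsilon/\sqrt{k\log(1/\delta)}$ makes the overall algorithm $(\varepsilon,\delta)$-DP.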
To explain the term, the point is that for the private algorithm to work, we need two things. First, the private algorithm finds a covariance matrix (or more precisely, a mean-covariance pair ) with low score (see Line 932 and the above few lines for the definition). Second, if a mean-covariance pair has a low score, then it is actually similar to a true mixture component . The previous terms of and are needed for the first part. However, the term is needed for the second part: that any mean-covariance pair of low score is similar to some true mixture component (this is the goal of Proposition E.4 in our paper).
The reason for the dependence is nontrivial so here’s a high-level intuition. Given a dataset and mean-covariance pair , the score is small if there exists roughly fraction of data points that “look like” they came from . The point is that one can have one point come from each of different mixture components which, together, look like samples from a totally different Gaussian from any of the mixture components. As a result, we need to make sure we have more than total points, because then an fraction of the data points is more than total samples. We actually need an additional factor of , because it turns out that we can even have points from each of the mixture components which look like samples from a totally different Gaussian.
I thank the authors for their explanation, and hope it will be reflected in the final version, so that all readers will have the opportunity to fully comprehend this important result.
The paper studies the problem of learning a mixture of d-dimensional Gaussians using a mechanism that is differentially private with respect to the samples. It provides an improved sample complexity which has an asymptotically optimal dependence on the dimension for a small number of components; the corresponding lower bound is also given by the paper. Additionally, for d = 1 the paper provides an optimal sample complexity.
The paper follows a high-level approach of obtaining crude approximations of the means and covariances of the component Gaussians, which can be used, via a net-based argument, to reduce the problem to private hypothesis selection, for which an existing algorithm (needing a number of samples only logarithmic in the size of the net) can be used. To obtain the crude approximations while respecting the DP guarantee, the paper uses a variant of the exponential mechanism with a carefully constructed scoring function measuring the distance between a candidate Gaussian and any “heavy” component of the Gaussian mixture. This uses a number of samples depending on the size of the sample set and the dimensionality of the hypotheses . A naive approach fails due to the blowup incurred, and the paper utilizes sample compression to reduce the dependence to and improves the dimensionality dependence to via an estimation argument.
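For readers unfamiliar with the primitive, the generic form of the exponential mechanism referred to above (standard statement, not the paper's exact instantiation) is: given a score function $\mathrm{score}_D(\theta)$ whose value changes by at most $\Delta$ when one data point of $D$ is changed, output candidate $\theta$ with probability
\[
\Pr[\text{output} = \theta] \;\propto\; \exp\!\Bigl(-\frac{\varepsilon\,\mathrm{score}_D(\theta)}{2\Delta}\Bigr),
\]
which is $\varepsilon$-DP. Inverse-sensitivity-style scores are designed to have sensitivity $\Delta \le 1$ while being small only for candidates that are close to a (robust) estimate computed from $D$.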
The above is outlined in more detail in the main paper and proved formally in the appendices. While not all of the details have been verified by the reviewer, the approach taken by the paper appears correct.
Strengths
- The paper proves improved (and in some cases optimal) sample complexity for privately learning Gaussian mixtures.
- The paper uses a novel crude-approximation-based approach paired with an existing algorithm for private hypothesis selection.
- The paper leverages inverse sensitivity mechanisms for decoding the crude approximations and techniques for sample compression, and combines them with a way to improve the dimensionality dependence.
- The main result, as well as the one for the univariate case and the lower bound, together constitute notable progress on a well-studied problem.
- The paper is well written and the provided roadmap greatly aids the understanding.
Weaknesses
- The sample complexity does not match the lower bound in the dependence on .
- The degradation in the dependence on k from the univariate to the multivariate case is not explained in the main paper.
- The details of the various technical parts are tedious and could be alleviated by better presentation, e.g. by listing all the parameters and their dependencies in a table.
Questions
It would be useful to combine Definitions 1.1 and 1.2 into a precise definition of privately learning GMMs.
Limitations
Yes, addressed.
We would like to thank the reviewer for their detailed feedback. We now address some of the issues/questions raised by the reviewer.
Note that the algorithm for the univariate case (Section F) is completely different from the one for the multivariate case (Section E). The algorithm in the univariate case requires us to order the data points from smallest to largest, which doesn't make sense in high dimensions. So, it only works for d = 1. We could use the multivariate algorithm when d = 1 as well, but we would get a worse dependence on k (more specifically, we would get k^2 in the bound rather than a linear dependence on k).
We are happy to list our results, along with previous results, in a table so that one can easily compare our work with previous work (as well as compare upper and lower bounds). We also agree that a final definition combining Definitions 1.1 and 1.2 would be useful; we will add it immediately after these two definitions.
I acknowledge the rebuttal by the authors. Just to clarify, in my weakness no. 3 comment, I had suggested listing the parameters used in the different proofs and their dependencies in table(s). This would make the proofs a bit more accessible in my opinion and I hope the authors will look into it. Overall my rating of Accept remains unchanged.
We thank all of the reviewers for their helpful feedback.
We apologize for the difficulty in reading the paper. We will make changes as suggested by the reviewers to improve the readability of the paper, such as adding a table of results, adding some additional technical description to the main body (up to space limitations), and adding some outline and summary sections in the appendix so that it will be more structured and understandable.
The paper considers the problem of estimating k-component, d-dimensional Gaussian mixtures with central differential privacy. Previous algorithms required k^2 d^4 samples, and the proposed work shows that kd^2 + k^1.5 d^1.75 + k^2 d samples suffice. In the high-dimensional setting, when d >> k^2, they further show that this bound is optimal. They also show that the sample complexity of learning one-dimensional Gaussian mixtures is linear in k. The results are interesting and I recommend acceptance.
As the reviewers mention, the writing is dense, and based on my own reading of the paper, I completely agree. I strongly urge the authors to utilize the full page limit and improve the writing in the final version of the paper by addressing the reviewers' concerns. It would also be great if the authors could add additional intuition on how each of the terms in the sample complexity arises and also highlight the technical novelty that enabled this result.