Optimal Hypothesis Selection in (Almost) Linear Time
We introduce the first near-linear-time algorithm for hypothesis selection that achieves optimal accuracy, significantly advancing previous methods.
Abstract
Reviews and Discussion
This paper studies the hypothesis selection problem: Given $n$ distributions $H_1, \dots, H_n$ and samples from an unknown distribution $P$, the goal is to output $H_i$ such that $d_{TV}(P, H_i) \le \alpha \cdot \mathrm{OPT} + \epsilon$, where $\mathrm{OPT} = \min_j d_{TV}(P, H_j)$ is the TV-distance of the best hypothesis.
It is known that achieving a guarantee with $\alpha < 3$ is impossible, if we want the sample complexity to be bounded by a function of $n$ and $\epsilon$ (and not the domain size). Prior work gave various algorithms that achieve constant $\alpha$ (including some with $\alpha = 3$), but none of them achieves $\alpha = 3$ in near-linear time without making additional assumptions.
This work gives two improved algorithms. The first algorithm (Algorithm 1) achieves $\alpha = 3$ in $\tilde{O}(n \cdot s)$ time, up to additional $\mathrm{poly}(1/\epsilon, 1/\delta)$ factors, where $s$ is the sample complexity. The second algorithm (Algorithm 4) achieves $\alpha = 4$ in $\tilde{O}(n \cdot s)$ time.
Here is an overview of Algorithm 1, outlined by the authors in Section 3:
- Let $S_{ij}$ denote the subset of the domain that witnesses $d_{TV}(H_i, H_j)$. A simple argument shows that finding the $H_i$ that minimizes the quantity $\max_j \big(H_i(S_{ij}) - P(S_{ij})\big)$ gives the $\alpha = 3$ guarantee. However, the straightforward implementation takes $\Omega(n^2)$ time to compute these maxima for all $i$.
- The actual algorithm maintains estimates of these $\max_j$ quantities (that always underestimate). These are grouped into "buckets", where bucket $k$ contains the indices whose current estimate falls in the $k$-th interval of a discretization of $[0, 1]$.
- In each iteration, we find the lowest non-empty bucket and try to update the estimates. Concretely, we loop over all $j \in [n]$ and only sample a few indices $i$ from the bucket. For each pair $(i, j)$, we try to update the estimate of $i$ using the semi-distance with respect to $H_j$. This might bump some $i$'s into larger buckets.
- If there is a single $H_j$ that bumps a substantial fraction of the sampled $i$'s, we call such an $H_j$ a "prompting hypothesis", and we use it to update all estimates in the current bucket. Note that if we keep finding prompting hypotheses, we will empty a bucket within a small number of repetitions, and the whole process must end after relatively few iterations. Moreover, each iteration is almost linear-time.
- The harder part is to argue that, whenever we cannot find a prompting hypothesis, we are done. At a high level, this boils down to showing that the optimal hypothesis must be prompting, unless the majority of hypotheses in the lowest bucket are already good enough. (A schematic sketch of the whole loop is given after this list.)
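To make the bookkeeping concrete, here is a schematic Python sketch of the loop described above. Everything in it is illustrative: `semi_dist` stands in for the (estimated) semi-distance subroutine, and the bucket width `eps`, the per-round sample size `m`, and the 1/2 promotion threshold are constants I picked for readability rather than the ones used in Algorithm 1.

```python
import random


def select_hypothesis(n, semi_dist, eps=0.05, m=20, max_rounds=10_000):
    """Schematic sketch of the bucketed estimate-maintenance loop.

    semi_dist(i, j): stand-in for the semi-distance of hypothesis i measured
    from the perspective of hypothesis j (estimated from samples in the real
    algorithm).  eps is the bucket width and m the per-round sample size;
    both are illustrative, not the constants of Algorithm 1.
    """
    est = [0.0] * n  # underestimates of max_j semi_dist(i, j), only pushed up

    def bucket(i):
        return int(est[i] / eps)

    for _ in range(max_rounds):
        lo = min(bucket(i) for i in range(n))             # lowest non-empty bucket
        members = [i for i in range(n) if bucket(i) == lo]
        sampled = random.sample(members, min(m, len(members)))

        # Look for a "prompting hypothesis": a single j that would bump a
        # substantial fraction of the sampled indices out of bucket lo.
        prompting = None
        for j in range(n):
            bumped = sum(int(max(est[i], semi_dist(i, j)) / eps) > lo
                         for i in sampled)
            if bumped >= len(sampled) / 2:
                prompting = j
                break

        if prompting is None:
            # No prompting hypothesis: most of the lowest bucket is already
            # good enough, so return one of its members.
            return random.choice(members)

        # Use the prompting hypothesis to push up every estimate in the bucket;
        # this may move indices to higher buckets and eventually empty it.
        for i in members:
            est[i] = max(est[i], semi_dist(i, prompting))

    return min(range(n), key=lambda i: est[i])
```

Each round costs roughly $n \cdot m$ semi-distance evaluations, which is how the per-iteration cost stays near-linear in $n$; the technical work in the paper lies in carrying this out with estimated semi-distances and in bounding the number of rounds.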
Strengths
- This work makes progress on improving the runtime for a fundamental problem in statistics / machine learning.
- The new algorithms are based on clever ideas and several new concepts, which might find applications in future work.
- The writing is very clear and I found the main paper relatively easy to follow. I especially liked that the authors spent most of the main paper on the proof overview, which is enjoyable to read and should be able to convince the readers why the proposed methods work.
Overall, this paper contains fairly significant results that are very well presented and should be of interest to most learning theorists. I vote for acceptance.
Weaknesses
The main weakness is that this work is unlikely to have said the last word on the problem---it remains open whether one can get $\alpha = 3$ in $\tilde{O}(n \cdot s)$ time.
Minor writing suggestions:
- Equation (2): Missing extra space in on the left-hand side? (The notational convention in the rest of the paper seems to be , though personally writing might be clearer.)
- Line 90: "a factor" --> "an factor"
- Line 270: "dependency" not capitalized.
Questions
Is there a reason why the "right" runtime should be $n \cdot s$? For example, given that the algorithm needs $\Omega(n + s)$ time to read the inputs---can we hope to get an $\tilde{O}(n + s)$ runtime?
Limitations
Limitations are adequately addressed in Section 1.2.
Thank you for your feedback.
In response to your question, there is no explicit lower bound for the time complexity of this problem. As you mentioned, it is possible that an algorithm could exist with a time complexity of just $\tilde{O}(n + s)$. However, for algorithms that operate by querying the probabilities of various sets according to an unknown distribution $P$—such as querying semi-distances or similar concepts, as many previous algorithms have done—we speculate that $\Theta(n)$ queries to the semi-distances are both necessary and sufficient. This would result in an overall time complexity of $\Theta(n \cdot s)$. We will clarify this in future versions of our paper.
Evidence suggesting that $O(n)$ queries might be sufficient comes from the upper bound of [MS08]. They demonstrated that there exist $O(n)$ sets to query, whose responses are sufficient to solve the problem. In their work, these sets and the order in which to query them are determined through an exponential-time preprocessing step over the class $\mathcal{H}$. It would be valuable to explore whether this problem can be solved with $O(n)$ queries without relying on the preprocessing step.
Thank you for your reply! I have no further questions, and my overall evaluation of the paper remains positive.
This paper studies the hypothesis selection problem, where given a set of candidate distributions $\mathcal{H} = \{H_1, \dots, H_n\}$ and samples from a distribution $P$, the learner wants to approximately select a distribution $H \in \mathcal{H}$ such that $d_{TV}(P, H) \le \alpha \cdot \mathrm{OPT} + \epsilon$ for some constant $\alpha$, where $\mathrm{OPT}$ is the TV distance of the closest distribution in $\mathcal{H}$ to $P$. Information theoretically, $\alpha = 3$ is the optimal constant one could achieve with the optimal sample complexity $O(\log n / \epsilon^2)$. Previous efficient algorithms either have a running time with a bad dependence on $n$ or run in nearly linear time but achieve a suboptimal approximation factor. In this paper, the authors design an algorithm that achieves the optimal approximation factor $\alpha = 3$ with a running time nearly linear in $n$. This is the first algorithm that achieves the optimal approximation factor with a running time nearly linearly dependent on $n$. This paper also presents another algorithm that achieves an approximation factor of $\alpha = 4$ but has a better running time.
Strengths
This paper makes progress on a fundamental problem, hypothesis selection. The two algorithms designed by the authors are highly non-trivial and involve many novel techniques. The paper is in general written well and gives a good intuition of why the algorithms work. Due to the time limit, I was not able to check the proofs in the appendix but given the discussions in the main body, I believe the results given by the paper are correct.
Weaknesses
I want to give some minor suggestions.
- It would be nice to give some more applications or motivations at the beginning of the introduction. In the current version of the paper, the problem sounds mathematically interesting but lacks motivation.
- Since there is a long line of works that design nearly linear time algorithms for the problem, it would be helpful to give a brief review of the techniques used in prior works in the introduction.
- In line 327, I believe there is a typo in the equation.
Questions
I only have one question about the model. It seems that the algorithms in the paper crucially rely on estimating the semi-distance defined based on the Scheffé set. But I believe in many applications it would be very hard to check whether an example falls into the Scheffé set of two hypotheses. For example, many learning algorithms might first improperly learn a list of candidate hypotheses and use hypothesis selection to select a good one. Computing the pdfs of these candidate hypotheses may not be realistic in general. Would it be possible to relax the dependence on the knowledge of the pdfs of the candidate hypotheses in the model?
Limitations
N/A
Thank you for your feedback. We will include the motivations and applications of hypothesis selection, as well as an overview of previous algorithms, in our paper. In this rebuttal, some applications of hypothesis selection are discussed in our response to Reviewer fMmw, for your reference.
Regarding your question about relaxing the model, the short answer is yes; our algorithms remain effective with some minimal adjustments as long as approximations of the Scheffé sets are provided.
Currently, in our paper, we compare the pdfs of the known distributions, $H_i$ and $H_j$, to determine whether a sample $x$ belongs to $S_{ij}$, the Scheffé set of $H_i$ and $H_j$. However, this assumption can be relaxed if we can identify an alternative set $A_{ij}$ that captures most of the discrepancy between $H_i$ and $H_j$. In particular, it is sufficient to have $H_i(A_{ij}) - H_j(A_{ij}) \ge d_{TV}(H_i, H_j) - \epsilon$. Even with these imperfect sets, the maximum semi-distance computed with respect to them is a good proxy for the quality of hypothesis $H_i$ (up to additive error $O(\epsilon)$). See the equations after Line 180 in the paper.
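To make the required access pattern explicit, here is a minimal sketch of the quadratic-time selection rule (the classical minimum-distance-type estimate in the spirit of [DL01], not the near-linear algorithm of our paper) written against an abstract set interface. The names `set_mass` and `member` are placeholders for whatever approximate-set access is available: `set_mass(i, j)` should return $H_i(A_{ij})$ and `member(i, j, x)` should test whether $x \in A_{ij}$.

```python
def min_distance_select(n, samples, set_mass, member):
    """Select the hypothesis minimizing its maximum empirical semi-distance,
    max_j [ H_i(A_ij) - P_hat(A_ij) ], computed only through set-level access.

    samples:  i.i.d. samples drawn from the unknown distribution P.
    set_mass: set_mass(i, j) -> mass H_i places on the (approximate) set A_ij.
    member:   member(i, j, x) -> True iff x lies in A_ij.

    With exact Scheffe sets this recovers a 3*OPT + O(eps) guarantee given
    enough samples; with sets that only capture the discrepancy up to eps,
    the guarantee degrades by an additional additive O(eps), as noted above.
    """
    m = len(samples)
    best_i, best_score = None, float("inf")
    for i in range(n):
        score = 0.0
        for j in range(n):
            if j == i:
                continue
            p_hat = sum(member(i, j, x) for x in samples) / m  # empirical P(A_ij)
            score = max(score, set_mass(i, j) - p_hat)         # semi-distance estimate
        if score < best_score:
            best_i, best_score = i, score
    return best_i
```

Note that the unknown distribution enters only through the empirical frequencies of the sets, and the pdfs of the hypotheses enter only through the masses $H_i(A_{ij})$, which can be precomputed or estimated by sampling from the known hypotheses.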
Thanks for the response.
This paper introduces proper approximation algorithms for optimal hypothesis selection in the context of distribution learning. The first algorithm achieves an optimal approximation factor of $3$ in time approximately linear in the number of hypotheses. The second achieves the slightly looser approximation factor of $4$ in exchange for significantly reduced dependence on $\epsilon$ and $\delta$, the standard learning parameters governing the error and confidence of the learning algorithms. The paper focuses on explaining the algorithmic techniques involved.
Strengths
The paper introduces a novel learning algorithm for optimal approximation (Algorithm 1) which achieves a significantly reduced dependence (quadratic to near-linear) on the number of hypotheses in the hypothesis class in the computational complexity. This is notable given that the problem has been studied for a while, and practical, given that distribution learning over a discrete space requires a number of samples at least proportional to the cardinality of the domain on which the hypotheses are supported.
Weaknesses
I found the writing to be unpolished, and not publication-ready. Much of the body attempts to furnish intuition for the algorithmic techniques introduced, but some portions seem underdeveloped, and occasionally read like direct translations from math to spoken language (e.g. line 358). For example, an extremely important concept in this paper (introduced in important prior works) is that of "semi-distances". Seemingly the best intuition the reader is provided with regarding this concept comes at line 176: "One suggestion for readers to internalize the semi-distances is to view them as a distance between $H_i$ and $P$ that is measured from the perspective of $H_j$". I think this deserves more illuminating wording, and personally I got a much better understanding by staring longer at the definition. Overall, I think the lack of attention to writing is somewhat a shame, as there seems to be a lot of nice algorithmic thought here which deserves a better exposition in my opinion.
On a more technical level, the dependence in the computational complexity of Algorithm 1 on the learning parameters $\epsilon$ and $\delta$ is heavy -- these enter as large polynomial factors. One would really hope for something like $\epsilon^{-2}\log(1/\delta)$. It seems there will be many regimes in which the guarantees of Algorithm 1 -- despite the linear dependence on the number of hypotheses -- will be looser than those of the quadratic algorithm of MS08.
Questions
- Table 1: Is the dependence on $\delta$ just the standard $\log(1/\delta)$?
- line 180: The order of logical quantifiers can be reversed to get a slightly stronger and more illustrative statement here, correct?
- line 240: It seems that to do this random sampling to decide if you have a prompting hypothesis, you need to check a number of hypotheses which is proportional to $1/\delta$. I guess there is a naiver version of this algorithm which just checks all of the hypotheses at this step, incurring some quadratic dependence in $n$. Am I wrong in feeling like any sort of search for something like a prompting hypothesis will always incur some sort of undesirable multiplicative interaction with $1/\delta$?
Limitations
Yes
Presentation: Thank you for your feedback regarding the presentation of our paper. Our overview was intended to provide a high-level description of our algorithm to avoid obscuring the main technical ideas with detailed specifics. We will certainly focus on enhancing the clarity and quality of our write-up in future versions.
Dependency on $\epsilon$ and $\delta$: Please see our global rebuttal.
Below are our answers to your questions:
- In Table 1, for [ABS23] the dependence on $\delta$ is $\mathrm{poly}(1/\delta)$, while for the rest it is $\log(1/\delta)$ (or unspecified by the authors).
- Correct. There exists an $H_j$ such that for every $H_i$, the semi-distance of $H_i$ with respect to $H_j$ determines the quality of $H_i$.
- Correct. To ensure that a random hypothesis in a bucket is not too far, with a probability of $1 - \delta$ in a single round, we have to sample $\mathrm{poly}(1/\delta)$ hypotheses and check if $H_{i^*}$ is prompting them. Or, we can try all hypotheses in the bucket. Hence, relying on this structural property of the lowest bucket makes a polynomial dependency on $1/\delta$ inevitable. We speculate that improving the dependency on $\delta$ for this algorithm would require new algorithmic ideas. The main difficulty here is that there is no (known) general technique for boosting the confidence parameter $\delta$ while keeping $\alpha$ the same. In many settings, the success probability of a learning algorithm can be amplified from a constant, say 2/3, to at least $1 - \delta$ at a cost of at most an $O(\log(1/\delta))$ factor in running time and sample complexity. However, in hypothesis selection, choosing the best output from several runs of a given algorithm requires executing a second hypothesis selection algorithm, which introduces another factor of 3 in the approximation—leading to a total factor of at least 9 (a short calculation is spelled out below). As a result, these kinds of two-phase algorithms are not sufficient in the low-$\alpha$ regime. Some previous results, such as [ABS23], also suffer from a polynomial dependency on $1/\delta$. Our second algorithm circumvents this polynomial dependence at the cost of a slightly worse accuracy parameter ($\alpha = 4$).
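To spell out the calculation referred to above: if each of $k$ independent runs returns, with constant probability, a hypothesis $H^{(\ell)}$ with $d_{TV}(P, H^{(\ell)}) \le 3\,\mathrm{OPT} + \epsilon$, and we then pick among the $k$ outputs with a second factor-3 hypothesis selection step, the final output $\hat{H}$ only satisfies

```latex
d_{TV}(P, \hat H)
  \;\le\; 3 \cdot \min_{\ell \in [k]} d_{TV}\big(P, H^{(\ell)}\big) + \epsilon
  \;\le\; 3\,\big(3\,\mathrm{OPT} + \epsilon\big) + \epsilon
  \;=\; 9\,\mathrm{OPT} + 4\epsilon .
```

So the two-phase scheme degrades the approximation factor from $3$ to at least $9$, even though it drives the failure probability down.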
Thanks for the reply. I do have a concern that the algorithmic techniques for the $\alpha = 3$ case may not be very informative for future algorithmic development re: the discussion on prompting hypotheses above. However, in retrospect, I think this is probably a bit unfair, as I'm very far removed from the study of this particular problem. Thus, I revise my score to 5.
I do strongly encourage the authors to improve the presentation in the next version. Good work deserves good presentation.
Thank you for your comment and for raising your score.
We understand your concern regarding the direct usability of our algorithms in future work. While we cannot guarantee their applicability, our hope is that the novel perspective we introduced, along with the new algorithmic ideas, will lead to improvements in the time complexity of this problem. We will invest more effort in distilling the new algorithmic ideas and structural results into a form that is widely useful.
This paper looks at proper distribution learning: given samples from some distribution p, and a set of n candidate distributions H_i, output one H_i that is close in TV; in particular, alpha * OPT + eps. Surprisingly, this is possible with a sample complexity independent of the domain size (as would be needed to actually estimate the TV). There's been a long line of work aiming to improve the approximation factor alpha. alpha = 2 is possible for improper learners, but proper learners can only hope for alpha = 3. Getting alpha=3 in n^2 s time is known; this paper gets that down to O~(ns) time (although an extra eps factor in time, which they can avoid for alpha=4, and worse delta dependence).
Strengths
It gives much better running time for a natural problem.
The approach is fairly clean.
Weaknesses
The algorithm overview is a bit vague, and could be written more clearly.
The failure probability dependence isn't good.
The alpha=3 approach is a fairly simple extension of prior work, and alpha=4 isn't so exciting.
I'm not sure that the constant here matters for the applications of hypothesis selection. Like, in the application where we choose a cover for the class, presumably we can just make a finer cover?
Questions
Are there applications of hypothesis selection where the constant alpha matters?
Limitations
Fine.
Presentation
Thank you for your feedback regarding the presentation of our paper. Our overview was intended to provide a high-level description of our algorithm to avoid obscuring the main technical ideas with detailed specifics. We will certainly focus on enhancing the clarity and quality of our write-up in future versions.
Novelty of our techniques
The essence of our first algorithm indeed stems from the minimum distance estimate [DL01]. However, the primary challenge we faced was implementing this approach in linear time. Specifically, it is not feasible to compute the maximum semi-distances for all hypotheses in linear time. To address this, we developed an efficient and advanced method to estimate these values by taking the maximum over a small subset of semi-distances, while ensuring the algorithm's correctness. This approach allows us to achieve an algorithm that runs in (almost) linear time in $n$.
Our second algorithm is completely novel to the best of our knowledge. It achieves the desired time complexity (up to logarithmic factors) with $\alpha = 4$, an accuracy parameter that surpasses all previously known results in this near-linear-time regime. Our work introduces a novel algorithmic approach to this problem, marking a significant departure from existing techniques. We hope these new ideas will inspire future research.
Dependency on $\epsilon$ and $\delta$
Please see our global rebuttal.
Applications of hypothesis selection
Density estimation is a fundamental problem in statistics, with hypothesis selection being an important special case. It involves choosing the best distribution from a set of known models that represent potential underlying data distributions. For example, this set might include Poisson and gamma distributions with various parameters to model the number of arrivals per time unit. This technique is applicable in areas such as denoising, anomaly detection, selecting interpretable data distributions, strategy selection, and more.
That said, we view our results as a fundamental theoretical tool. Hypothesis selection has been instrumental in learning structured distributions (e.g., learning mixtures of Gaussians [DK14, SOAJ14]). For additional references, see Section 1.3. Another significant aspect of hypothesis selection is its agnostic nature, allowing for learning even when the unknown distribution is not within the considered class. Hence, hypothesis selection is applicable even when data is noisy.
Addressing the importance of improving $\alpha$ by a constant factor
In most learning algorithms, the error guarantee decreases polynomially as the number of samples increases, so constant factors may not be as crucial. However, this is not the case in hypothesis selection. The output hypothesis is guaranteed to be $(\alpha \cdot \mathrm{OPT} + \epsilon)$-close to $P$ in total variation distance. While increasing the number of samples can reduce $\epsilon$ to negligible levels, it does not improve the term $\alpha \cdot \mathrm{OPT}$. The factor $\alpha$ is an inherent property of the algorithm and directly impacts the best achievable error guarantee. Therefore, even a constant improvement in $\alpha$ is significant.
Some may argue that focusing on improving $\mathrm{OPT}$ is more beneficial than refining $\alpha$. For instance, in the cover method (as you mentioned), using a finer $\epsilon$-net can ensure that $\mathrm{OPT} \le \epsilon$. However, this approach can drastically increase the algorithm's running time, as the size of the net can grow super-polynomially with $1/\epsilon$. For example, in the case of mixtures of Gaussians, the net size grows as $1/\epsilon$ raised to a power that depends on the number of components and the dimension (see [SOAJ14]). Thus, reducing $\epsilon$ by a factor of three could increase the size of the cover by a factor exponential in that power, and consequently the running time, leading to an inefficient algorithm.
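As a stylized calculation (the exponent $D$ below is a placeholder for the class-dependent quantity, e.g., a polynomial in the number of components and the dimension for Gaussian mixtures; see [SOAJ14] for precise bounds): if the cover has size $N(\epsilon) = (C/\epsilon)^{D}$, then refining it by a factor of three gives

```latex
N(\epsilon/3) \;=\; \Big(\tfrac{3C}{\epsilon}\Big)^{D} \;=\; 3^{D}\, N(\epsilon).
```

So the number of hypotheses (and hence the running time of any selection procedure that is at least linear in it) blows up by $3^{D}$, whereas improving $\alpha$ by a constant factor leaves the cover untouched.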
Thank you for your response.
Thank you for taking the time to review our paper and for your feedback.
Presentation: Thank you for your comments regarding the presentation of our paper. We will incorporate all your editorial suggestions regarding the presentation of the paper. We will certainly focus on enhancing the clarity and quality of our write-up in future versions.
Dependency on $\epsilon$ and $\delta$: We acknowledge that, ideally, an algorithm should achieve a running time of $\tilde{O}(n \cdot s)$. Our first algorithm, with $\alpha = 3$, does not meet this ideal due to suboptimal dependencies on $\epsilon$ and $\delta$. However, it marks a significant step forward as it is the first in two decades to achieve a time complexity linear in $n$ for the optimal approximation factor $\alpha = 3$. To address the shortcomings, our second algorithm achieves the desired dependencies on $\epsilon$ and $\delta$, up to polylogarithmic factors, with a slight increase in the accuracy parameter ($\alpha = 4$). Given the significant departure from previous algorithmic approaches required by our work, we hope that our techniques will inspire further progress in this area.
This paper presents a new algorithm for a natural distribution learning problem that has been well-studied in the theory ML literature. It obtains the first linear time algorithm (linear in the number of candidate distributions) that achieves a constant factor approximation (in fact, the best known approximation for a proper learner). All of the reviewers agreed that this was an interesting result, even if the paper falls slightly short of fully resolving the problem (it has a sub-optimal dependence on the statistical error, epsilon). There was also concern that the paper is not sufficiently polished, but we believe the author discussion period will help the authors improve their draft of the paper, so we are recommending acceptance.