Provable Benefit of Annealed Langevin Monte Carlo for Non-log-concave Sampling
We obtain the first non-asymptotic complexity analysis of annealed Langevin Monte Carlo for sampling non-log-concave distributions.
Reviews and Discussion
The paper considers the problem of sampling from an unnormalized density that may be non-log-concave and multimodal using annealed MCMC. The authors present a non-asymptotic analysis of annealed Langevin Monte Carlo, bypassing the log-concavity assumption commonly imposed in the literature. The quantitative bounds derived are widely applicable and either comparable to or better than previous analyses of isoperimetry-free sampling methods.
Strengths
The paper provides, as far as I know, the first thorough and non-asymptotic analysis of the annealed Langevin Monte Carlo algorithm. The results obtained advance the understanding and analysis of annealed MCMC, potentially setting the stage for further research in this area. The intuition behind the algorithm and the proofs are well explained.
Weaknesses
The discussion of the impact of the annealing schedule on the convergence rate is not sufficiently clear in the paper. How does the choice of the annealing schedule influence the derived bounds? Specifically, in the mixture-of-Gaussians example, how does varying the schedule parameter affect the action? Is there an optimal value of this parameter that minimizes the convergence rate?
Questions
Please see the weakness part.
Thank you for your appreciation for our work.
Weakness: How does the choice of the annealing schedule influence the derived bounds? Specifically, in the mixture-of-Gaussians example, how does varying the schedule parameter affect the action? Is there an optimal value of this parameter that minimizes the convergence rate?
Response: In this example, we obtain a uniform upper bound on the action for all values of the parameter. We choose this value because, intuitively, when the interpolation approaches the target, the distribution becomes more difficult to sample from, and we should move more slowly. We agree that for each problem there should be an optimal value of the parameter that achieves the best sampling quality, yet studying the influence of this parameter on the action is highly non-trivial. This will be the focus of our future study.
We hope that the response above successfully addresses your concerns.
The paper proposes a novel analysis of annealed LMC that uses a specific annealing schedule. The latter makes it possible to use another analysis scheme, based on the action of curves instead of isoperimetric inequalities.
Strengths
The paper provides a concise analysis with a clever mathematical trick: using the action of the curve instead of isoperimetry constants. Assuming gradient Lipschitzness of the potential function makes it easy to sample (via acceptance-rejection) from an initial distribution, which in turn allows sampling from the target distribution (non-convex, gradient-Lipschitz) using an annealing schedule for LMC.
Weaknesses
- Assumption 1 is not well discussed. One cannot verify beforehand whether the curve is AC. How should this be done? See the respective question in the section below.
- Assumption 2 is not the classic smoothness assumption. Instead it is the gradient Lipschitz continuity, which is stronger. This should be explicitly mentioned in the paper.
- The notation is slightly confusing. There are too many indices for the interpolation curves, and their usage is incoherent. Reading would be simplified considerably if the notation were developed more carefully.
- Simple experiments on sampling from a mixture of Gaussians using the proposed and existing methods would significantly boost the paper.
- In the abstract of the paper, the authors claim that their analysis is non-asymptotic; however, this is not true. Theorem 2 (main result) and related results are stated in asymptotic O(·) notation.
Mathematical comments
- The equation after line 1077 in the proof of Example 2 is incorrect. The density of the sum of two independent random variables (vectors) is the convolution of their densities rather than their product. Thus, the variable in question does not have the claimed distribution. This serves as the cornerstone of the remainder of the proof, which means that the proof is incorrect.
- The third line in the proof of Example 2 is incorrect. After recentering the Gaussians, the authors drop the normalizing terms. Since the means differ, the resulting mixture weights will differ from the original ones. Nevertheless, this could be fixed by replacing the weights with suitably modified ones.
- Paragraph starting at line 364: the authors motivate the use of annealing by noting that the schedule starts at a distribution that is easy to sample from. According to the authors, the reason sampling is easy is the strong convexity of the negative log-density. The latter is true only when the noise scale is large enough.
- Continuation of the previous point. Line 831: the strong convexity constant might be negative unless the noise scale is chosen to be large.
- Beginning of the proof of Theorem 1: the vector field is chosen to satisfy the Fokker-Planck equation on line 294. However, later on line 301 it is chosen to minimize the norm using Lemma 2. This is possible only if the minimizing vector field also generates the curve. Is it possible to find such a vector field? See the relevant question.
Typos
- line 956: cure -> curve
Questions
- Is the analysis using the action only available for annealing? Can we gain anything by applying this technique to LMC in the standard setting?
- What are the estimates of the action parameter for a more general class of distributions than the mixture of Gaussians?
- Is it possible to guarantee that for certain target distributions and schedules Assumption 1 is satisfied? This is important, as it is not easy to manually check for every setting.
- Is there always a vector field such that the identity in (4) holds for all times and it generates the curve, i.e., satisfies the Fokker-Planck equation?
Thank you for your careful reading of our paper and your valuable, constructive feedback.
Weakness 1 and Question 3: One cannot verify beforehand whether the curve is AC. Is it possible to guarantee that for certain target distributions and schedules Assumption 1 is satisfied?
Response: Absolute continuity is a fairly weak regularity condition on the curve of probability measures. Loosely speaking, as long as the curve has a density that is jointly continuously differentiable w.r.t. both the time and space variables, then it is AC.
For instance, we can sketch a proof of the proposition above in one dimension. Let $F_t$ be the c.d.f. of $\rho_t$, which is strictly increasing, so its inverse is well-defined; the inverse is also jointly continuously differentiable. We know from standard textbooks on OT that $W_2(\rho_s,\rho_t)^2=\int_0^1|F_s^{-1}(u)-F_t^{-1}(u)|^2\,\mathrm{d}u$. This quantity is $O((t-s)^2)$ as $s\to t$ thanks to the differentiability, and hence the curve is AC.
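As a numerical sanity check of the one-dimensional identity above, the sketch below (our illustration, not part of the paper) compares the inverse-c.d.f. integral with the closed-form $W_2$ between two Gaussians, for which $W_2^2=(m_1-m_2)^2+(s_1-s_2)^2$:

```python
import numpy as np
from statistics import NormalDist

# Two 1D Gaussians; for these, W_2^2 has the closed form (m1-m2)^2 + (s1-s2)^2.
m1, s1 = 0.0, 1.0
m2, s2 = 3.0, 2.0

# Midpoint grid on (0, 1) to approximate the integral of |F1^{-1} - F2^{-1}|^2.
u = (np.arange(10_000) + 0.5) / 10_000
q1 = np.array([NormalDist(m1, s1).inv_cdf(p) for p in u])
q2 = np.array([NormalDist(m2, s2).inv_cdf(p) for p in u])

w2_sq_numeric = np.mean((q1 - q2) ** 2)          # int_0^1 |F1^{-1} - F2^{-1}|^2 du
w2_sq_closed = (m1 - m2) ** 2 + (s1 - s2) ** 2   # closed form for 1D Gaussians

print(w2_sq_numeric, w2_sq_closed)
```

Here `NormalDist.inv_cdf` plays the role of $F^{-1}$; the two numbers agree up to discretization error, illustrating the textbook identity used in the response.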
Weakness 2: Assumption 2 is not the classic smoothness assumption. Instead it is the gradient Lipschitz continuity, which is stronger. This should be explicitly mentioned in the paper.
Response: We agree that "smoothness" has several different definitions. The definition we used in this paper, the Lipschitz continuity of the gradient (see the "notations and definitions" part in Sec. 1), is a common one that appears in many standard textbooks in optimization and sampling, such as Sec. 3.2 of Bubeck's Convex Optimization: Algorithms and Complexity and Chap. 4 of Chewi's Log-concave Sampling. We appreciate your comment on the potential ambiguity.
Weakness 3: There are too many indices for the interpolation curves, and their usage is incoherent.
Response: Sorry about the confusion the paper might have caused you. Throughout this paper, one curve is an AC curve bridging a simple distribution and the target distribution, while the other is the same curve after time-reparameterization. One set of indices denotes the discrete time points on the former curve, and the other denotes the discrete time points on the reparameterized curve, with a fixed step size between consecutive points. We hope that this clarifies the usage of these notations.
Weakness 4: Simple experiments on sampling from a mixture of Gaussians using the proposed and existing methods would significantly boost the paper.
Response: Please see Sec. 6 for the experimental results, where we verified the findings in Ex. 2.
Weakness 5: In the abstract of the paper, the authors claim that their analysis is non-asymptotic; however, this is not true. Theorem 2 (main result) and related results are stated in asymptotic O(·) notation.
Response: "Non-asymptotic analysis" refers to providing explicit bounds on algorithmic performance for fixed problem parameters, such as the dimension $d$, accuracy $\varepsilon$, and smoothness parameter $\beta$, rather than focusing on behavior in asymptotic regimes.
To better explain the difference between asymptotic and non-asymptotic results, consider the following two examples:
- The central limit theorem (CLT) implies that for i.i.d. copies $X_1,\dots,X_n$ of a random variable with mean $\mu$ and variance $\sigma^2$, the law of $\sqrt n(\bar X_n-\mu)$ converges to $\mathcal N(0,\sigma^2)$ as $n\to\infty$, where $\bar X_n$ is the empirical mean. Here, we only know the limiting behavior of this law as $n\to\infty$; for any finite $n$, the CLT does not quantify the discrepancy (say, KL divergence, TV distance, etc.) between the law of $\sqrt n(\bar X_n-\mu)$ and $\mathcal N(0,\sigma^2)$. Hence, this result is an example of asymptotic analysis.
- In optimization, we know that gradient descent has linear convergence for strongly convex and smooth functions. More precisely, suppose $f$ is $\mu$-strongly-convex and $\beta$-smooth with global minimizer $x^*$; then with step size $h=1/\beta$, $\|x_k-x^*\|^2\le(1-\mu/\beta)^k\|x_0-x^*\|^2$. This result not only implies that the iterate $x_k$ converges to $x^*$ as $k\to\infty$, but also tells us how far we are at every finite step $k$, and how many iterations are required to reach a point that is $\varepsilon$-close to the global minimizer, namely $k=O((\beta/\mu)\log(\|x_0-x^*\|/\varepsilon))$. Hence, this result is an example of non-asymptotic analysis.
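The non-asymptotic guarantee above is easy to verify empirically; the following sketch (with an arbitrary quadratic objective of our choosing, not from the paper) checks the per-iterate contraction:

```python
import numpy as np

# f(x) = 0.5 * x^T A x with eigenvalues in [mu, beta]:
# f is mu-strongly-convex and beta-smooth, with minimizer x* = 0.
mu, beta = 1.0, 10.0
A = np.diag([mu, beta])

def grad(x):
    return A @ x

h = 1.0 / beta                     # step size 1/beta
x = np.array([1.0, 1.0])
dist0 = np.linalg.norm(x) ** 2     # ||x_0 - x*||^2

for k in range(1, 51):
    x = x - h * grad(x)
    # Check the contraction ||x_k - x*||^2 <= (1 - mu/beta)^k ||x_0 - x*||^2.
    assert np.linalg.norm(x) ** 2 <= (1 - mu / beta) ** k * dist0 + 1e-12

print(np.linalg.norm(x) ** 2)  # shrinks geometrically with k
```

The assertion holds at every finite iteration, which is exactly the "explicit bound at every finite step" that distinguishes a non-asymptotic analysis from a limit statement.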
In Thm. 2, we establish the number of oracle calls required to obtain samples that are $\varepsilon$-accurate in KL divergence, which is clearly an example of non-asymptotic analysis. The asymptotic notations $O(\cdot)$ and $\widetilde O(\cdot)$ are only used to hide constant and logarithmic factors. We hope this clarifies the difference between asymptotic and non-asymptotic analysis.
Mathematical comment 1: The equation after line 1077 in the proof of Example 2 is incorrect.
Response: Thank you for your careful reading of our proof and for pointing out this potential issue. However, upon careful review, we believe that the proof as presented in the paper is correct. The distribution here is a mixture of Gaussians with a shared covariance. To sample from such a mixture, we can first sample one of the means with the corresponding mixture probability, then convolve with the shared Gaussian. Our construction is formed in exactly this way, by picking the mean and convolving it with the Gaussian, which should be correct.
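The two-stage sampler described in the response can be sketched as follows (the weights, means, and covariance below are placeholders of ours, not the paper's example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture: weights p_i, means mu_i, shared covariance sigma^2 * I.
weights = np.array([0.3, 0.7])
means = np.array([[-4.0, 0.0], [4.0, 0.0]])
sigma = 1.0

def sample_mixture(n):
    # Step 1: pick a mean with probability p_i.
    idx = rng.choice(len(weights), size=n, p=weights)
    # Step 2: convolve with the shared Gaussian, i.e. add N(0, sigma^2 I) noise.
    return means[idx] + sigma * rng.standard_normal((n, means.shape[1]))

samples = sample_mixture(100_000)
# Empirical mean should be close to sum_i p_i * mu_i = (0.3*(-4) + 0.7*4, 0) = (1.6, 0).
print(samples.mean(axis=0))
```

Sampling the component index first and adding shared Gaussian noise second is exactly the "pick the mean, then convolve" construction defended in the response.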
Mathematical comment 2: The third line in the proof of Example 2 is incorrect.
Response: Please note that we have assumed that all the means have the same norm in this example, and hence all the corresponding normalizing factors are equal and can be cancelled.
Mathematical comments 3 and 4: Paragraph starting at line 364. [...] The latter is true only when the noise scale is large enough. Line 831: the strong convexity constant might be negative unless the noise scale is chosen to be large.
Response: Your observation is correct. In Lem. 4, we have proved that with a sufficiently large noise scale, the complexity of sampling from the initial distribution is bounded as stated there.
Mathematical comment 5 & Question 4: Beginning of the proof of Theorem 1. [...] Is there always a vector field such that the identity in (4) holds for all times and it generates the curve, i.e., satisfies the Fokker-Planck equation?
Response: The logic here is as follows. We first choose any vector field that generates the curve. Such a vector field exists given the absolute continuity of the curve; see, e.g., Thm. 8.3.1 of Ambrosio et al., 2008, or Thm. 5.14 of Santambrogio, 2015. By the Fokker-Planck equation, this guarantees that the identity in (4) holds for all times. The Girsanov theorem then bounds the KL divergence between the path measures by the integrated squared norm of the drift difference. Using Lem. 2, among all vector fields that generate the curve, we may choose the one that minimizes this integrated squared norm, which yields the bound in terms of the action.
Questions 1: Is the analysis using the action only available for annealing? Can we gain anything by applying this technique to LMC in the standard setting?
Response: We think that it is not applicable to standard LMC: there we are not running LD with a changing target distribution, so there is no interpolation curve and hence no action. However, this action-based approach may be applied to the study of diffusion models and classifier-free guidance, which we will study in the future.
Questions 2: What are the estimates of the action parameter for a more general class of distributions than the mixture of Gaussians?
Response: The estimate of the action for more general target distributions is hard to obtain, as the Wasserstein distance does not have a closed form in general. One possible way is to leverage Lem. 3, as long as we have upper bounds on the LSI/PI constants. Yet, as Ex. 1 suggests, the resulting upper bound may be loose. This problem will be the focus of our future study.
We sincerely appreciate the time and effort you have dedicated to providing thoughtful feedback. We hope that our response has successfully addressed your concerns.
Weakness 1: The condition is indeed mild. However, for the sake of completeness, the theory should include a claim justifying why this assumption holds a priori. This claim needs to be explicitly stated and supported with a proof.
Weakness 3: Your explanation clarifies the notation well. Please include this discussion in the revised version for better readability and completeness.
Weakness 4 (Extended): The experiments do not compare the proposed method with existing approaches. Sampling from mixtures of distributions, such as those that can be addressed with LMC, is relatively straightforward (see Section 2.3 of [1]). Therefore, the paper should evaluate its method in more challenging scenarios than Gaussian mixtures.
Additionally, the ALMC results depend on the action parameter, which may become very large in the case of mixture distributions. Given that Gaussian mixtures are "easy" to sample from using Langevin methods, comparing the proposed approach with standard LMC under a global LSI condition is not particularly informative. While the proposed method might remain relevant in more general cases, the behavior of the action parameter under these conditions still needs thorough investigation.
Weakness 5: I understand your point, and I agree with it. What I intended to convey is that deriving convergence rates up to a constant would enhance the precision of the results.
Additional Comments: The rest of my concerns were adequately addressed, and I appreciate the effort made to clarify these points. As a result, I have increased my score. However, I still believe the paper requires further improvements, particularly in its experimental validation and theoretical completeness.
Reference:
[1] Dalalyan, Arnak S., and Avetik Karagulyan. "User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient." Stochastic Processes and their Applications 129.12 (2019): 5278-5311.
We would like to sincerely thank you for raising the rating and we appreciate your kind suggestions for improvement.
Several slight modifications have been made in the revised manuscript: we have included a new Lem. 4 in App. A, which is a formal statement of the sufficient condition for absolute continuity given in the rebuttal, and we have included the discussion on smoothness as a footnote on page 3.
For weakness 5, the complexity bound can actually be written as a sum of two terms: the dominant term is the complexity of running ALMC, and the other term is the cost of sampling from the initial distribution, whose form depends on the noise scale (see Lem. 5). As the latter term is in general small, it can be absorbed into the dominant one.
Further experimental validation and theoretical completeness on more general non-log-concave distributions will be the focus of our future study. Thank you again for your valuable comments.
The paper proves new bounds on the efficacy of annealed Langevin MC by constructing an SDE that approximates the target and then analyzing the discretization error introduced by simulating it on a finite machine.
Strengths
Computational bounds are generic: a smoothness constraint on the target distribution rather than isoperimetry assumptions.
Explanations are clear and well-illustrated, suiting a broad generalist audience by mixing intuition-building and commentary.
The direct impact of this paper is unclear, but the methods it introduces are interesting contributions.
Weaknesses
The paper shares the weakness of most such oracle inequalities, which is that they reassure us of convergence of an algorithm under assumptions that are hard to verify on a particular problem
Note that I am not an expert in all the components of the authors' proof, and have taken some steps on faith.
Questions
What does $\beta$-smoothness imply about the target distribution? Can we know this in general, especially where the density arises from a posterior? It seems like this quantity would be hard to calculate for even tractable likelihoods, let alone general posterior densities.
Thank you for your appreciation of our work and your instructive feedback.
Weakness: The paper shares the weakness of most such oracle inequalities, which is that they reassure us of convergence of an algorithm under assumptions that are hard to verify on a particular problem.
Response: We agree that verifying these assumptions can be challenging in specific applications. However, our goal in establishing these conditions is to provide a broadly applicable theoretical framework that characterizes the behavior of annealed sampling in a general setting. These assumptions are intended to capture typical scenarios that arise in sampling from complex, multimodal distributions, allowing our results to extend to a wide range of problems.
Question:
-
What does $\beta$-smoothness imply about the target distribution?
Response: Consider a target distribution $\pi\propto e^{-V}$, where the potential $V$ is $\beta$-smooth. By definition, this means $\|\nabla V(x)-\nabla V(y)\|\le\beta\|x-y\|$ for all $x,y$, which is equivalent to
$$\begin{cases}V(y)\le V(x)+\left\langle\nabla V(x),y-x\right\rangle+\frac{\beta}{2}\|y-x\|^2\\ V(y)\ge V(x)+\left\langle\nabla V(x),y-x\right\rangle-\frac{\beta}{2}\|y-x\|^2\end{cases}$$
Intuitively, this means the potential can be upper and lower bounded by quadratic functions at every point, meaning that it does not have regions of extremely high or low curvature. This assumption on the regularity of the target distribution aids in controlling the convergence of ALMC and in establishing the oracle complexity bounds.
-
Can we know this in general, especially where the density arises from a posterior? It seems like this quantity would be hard to calculate for even tractable likelihoods, let alone general posterior densities.
Response: Our paper focuses on the theoretical analysis of sampling, where we assume the potential is smooth to simplify the problem. In practice, the smoothness of the target distribution's potential requires case-by-case verification, and the constant is computable for some target distributions.
For instance, consider the Bayesian logistic regression problem: the parameter $\theta$ has prior distribution $\mathcal N(0,\sigma^2 I)$, and the likelihood is $p(y=1\mid x,\theta)=s(\langle x,\theta\rangle)$, where $s(u)=1/(1+e^{-u})$ is the sigmoid function. Then the potential of the posterior distribution given paired data points $(x_i,y_i)_{i=1}^n$, $y_i\in\{-1,+1\}$, is $V(\theta)=\frac{\|\theta\|^2}{2\sigma^2}+\sum_{i=1}^n\log\big(1+e^{-y_i\langle x_i,\theta\rangle}\big)$. It is easy to verify that $\nabla^2V(\theta)=\frac1{\sigma^2}I+\sum_{i=1}^ns'(y_i\langle x_i,\theta\rangle)\,x_ix_i^\top$ with $s'\le\frac14$. So for any unit vector $u$, $u^\top\nabla^2V(\theta)\,u\le\frac1{\sigma^2}+\frac14\sum_{i=1}^n\langle x_i,u\rangle^2$. Hence, $V$ is smooth with constant $\beta=\frac1{\sigma^2}+\frac14\lambda_{\max}\big(\sum_{i=1}^nx_ix_i^\top\big)$.
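The smoothness constant above can be sanity-checked numerically; the data, dimensions, and prior variance below are synthetic placeholders of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Bayesian logistic regression: prior N(0, sigma2 * I), data (x_i, y_i).
n, d, sigma2 = 20, 3, 4.0
X = rng.standard_normal((n, d))
y = rng.integers(0, 2, size=n) * 2 - 1   # labels in {-1, +1}

def hessian(theta):
    # V(theta) = ||theta||^2 / (2 sigma2) + sum_i log(1 + exp(-y_i <x_i, theta>))
    z = (X @ theta) * y
    s = 1.0 / (1.0 + np.exp(-z))
    w = s * (1.0 - s)                    # sigmoid derivative, at most 1/4
    return np.eye(d) / sigma2 + (X * w[:, None]).T @ X

# Candidate smoothness constant: 1/sigma2 + (1/4) * lambda_max(sum_i x_i x_i^T).
beta = 1.0 / sigma2 + 0.25 * np.linalg.eigvalsh(X.T @ X).max()

# The Hessian's largest eigenvalue never exceeds beta, at any theta.
for _ in range(100):
    theta = rng.standard_normal(d)
    assert np.linalg.eigvalsh(hessian(theta)).max() <= beta + 1e-9

print(beta)
```

The bound is tight at `theta = 0`, where every sigmoid derivative equals exactly 1/4.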
We agree that verifying smoothness may be hard in some general cases, let alone computing the smoothness constant. Nevertheless, this assumption is common in non-asymptotic analyses, as it allows us to develop a clearer understanding of the algorithm's behavior under idealized conditions, even if such conditions may be approximations of practical scenarios.
We hope that our response has successfully addressed your concerns.
The paper studies the convergence of annealed Langevin algorithms for general, possibly non-log concave, target distributions. The authors propose using an argument based on Girsanov's theorem to bound the distance between a reference process and a Langevin SDE that is based on an annealing schedule of target distributions, bridging the target to a base distribution. The authors also consider a practical implementation of the continuous time SDE, based on the exponential integrator scheme.
Strengths
The authors consider an interesting problem, which is actively researched. Most of the research has focused on obtaining convergence bounds for standard Langevin algorithms imposing strong assumptions on the target distribution, such as log-concavity or log-Sobolev inequalities. The approach in this paper seems original, since it differs from strategies that are typically used in the literature as explained by the authors. Moreover, it gives convergence bounds in KL for an annealed Langevin algorithm, for which few results are available, and under "weak" conditions. Overall, the paper is clear enough to follow what the authors are doing and has a cohesive story.
Weaknesses
-
The authors consider a specific annealing schedule and it is not clear if their results give any kind of clarity on how the schedule affects the overall complexity of the algorithm.
-
On several occasions I would have appreciated clearer explanations. For instance, why consider the exponential integrator scheme instead of an Euler scheme? Why consider that specific annealing schedule? On a first read, it is also not completely clear why the topics in Section 2.4 appear where they do. It would be beneficial to convince the reader that these mathematical concepts are needed, also giving an idea of how they are useful in what follows.
-
The sketch of the proof of Theorem 2 is not very clear to me and I find it should be heavily improved. It seems the authors compute the KL between two processes, neither of which is the discretisation scheme. Then the notation in the proof in the appendix seems inconsistent, denoting by \nu^{ALMC} the measure of another process. As I motivate in the Questions section below, I struggle to see how the authors use Lemma 1 in the proof of this result. This should be clarified if the paper is to be considered for acceptance.
-
The contribution is in general interesting, but I would have liked to find analyses such as some (simple) numerical simulations showing that the convergence of the algorithm is, for instance, not exponential in the dimension.
-
Table 1 gives an overview of results available in the literature, but these are for various algorithms which are denoted by corresponding acronyms. This is not a very clear comparison in my opinion, and it would be nice to have a short explanation of the differences between these algorithms.
-
The intuitive explanations in lines 318-323 are rather obvious and are not really clarified by the results in the paper. This should be made clear to the reader to avoid confusion.
Questions
-
Theorem 2 uses Lemma 1, which is a consequence of Girsanov's theorem. As far as I understand, it is used to bound the KL between the SDE in Equation (6) and the "true" SDE, that is, Equation (4). However, the SDE (6) clearly does not fit into Lemma 1 of the paper, since a term in the drift depends on a past state. The authors should explain their reasoning and not just refer to Lemma 1, which is not general enough.
-
In Lemma 2, in what sense do the authors say the equality is "reachable"?
-
In lines 373-374 the authors state that the Euler scheme is "non-optimal", without supporting reasoning. What are their reasons for writing that?
-
In Example 1, the authors say that \rho_1 is "readily sampleable". Is it then the case that \rho_0 is a simple distribution that can be sampled? Here the notation is not clear yet. Also, in point 2. of the example, are the LSI and PI constants exponentially dependent on d for the distribution \rho_t for any t? In general, this example could be made clearer.
Thank you for your careful reading of our paper and your valuable, constructive feedback.
Weakness 1: The authors consider a specific annealing schedule and it is not clear if their results give any kind of clarity on how the schedule affects the overall complexity of the algorithm.
Response: The continuous-time analysis we presented in the paper is general and applies to any AC interpolation. In practice, the effect of the annealing schedule on the complexity is a crucial question.
In Lem. 3(1), we showed that for fixed endpoints, the curve with the minimal action is the constant-speed Wasserstein geodesic. However, this geodesic does not have a closed form except in some trivial cases, such as Gaussian distributions.
To realize a constant-speed Wasserstein geodesic, one may apply a time-reparameterization to the Wasserstein gradient flow. For example, the OU process is the Wasserstein gradient flow of the KL divergence w.r.t. its Gaussian stationary distribution, and after a time-reparameterization of the curve of its marginals, one may obtain a curve close to the constant-speed Wasserstein geodesic between the initial and stationary distributions. This curve is widely used in diffusion models, but we do not have a closed form for its scores in general. As our goal is to obtain the complexity of learning-free sampling algorithms, we do not study this approach in the paper; instead, we focus on the specific interpolation curve described in the paper. The effect of general annealing schedules on the sampling complexity will be the focus of our future study.
Weakness 2:
-
Why consider the exponential integrator scheme instead of an Euler scheme?
Response: Because the exponential integrator scheme reduces the discretization error compared with the Euler scheme. Recall the SDE of ALD and fix the discretization time points. On each discretization interval, the Euler scheme treats the entire drift, including its linear part, as constant; however, the integral of the linear part over the interval can be computed in closed form, and by doing so the exponential integrator scheme reduces the discretization error. This is also a standard approach for accelerating the sampling of diffusion models.
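As a minimal illustration (on a one-dimensional OU drift of our choosing, not the paper's ALMC SDE), the deterministic part of one discretization step shows the gap: the Euler scheme replaces the closed-form mean factor $e^{-\theta h}$ with its linearization $1-\theta h$, incurring an $O(h^2)$ per-step error, while the exponential integrator uses $e^{-\theta h}$ exactly:

```python
import numpy as np

theta = 1.0  # linear drift coefficient in dX_t = -theta * X_t dt + sqrt(2) dB_t

errors = []
for h in [0.2, 0.1, 0.05]:
    exact = np.exp(-theta * h)   # exponential integrator: closed-form mean factor
    euler = 1.0 - theta * h      # Euler: linear drift frozen over the whole step
    errors.append(abs(exact - euler))

# Halving h shrinks the Euler error by roughly 4x, i.e. the error is O(h^2).
print(errors)
print(errors[0] / errors[1], errors[1] / errors[2])
```

The exponential integrator has zero error on the linear part by construction; its remaining error comes only from the non-linear part of the drift, which is why it discretizes SDEs with a dominant linear term more accurately than the Euler scheme.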
-
Why considering that specific annealing schedule?
Response: This specific family of interpolation curves is a popular choice in practice (see the last paragraph of Section 5.1 for discussion). Moreover, another reason for choosing this family is that its score functions have closed forms; therefore, unlike many other curves, such as the Gaussian convolution curve, we do not need further training to approximate the score.
-
On a first read, it is also not completely clear why the topics in Section 2.4 appear where they do.
Response: Section 2.4 provides some essential technical tools in optimal transport for establishing our results. For instance, Lem. 2 is used in the proof of Thms. 1 and 2 to minimize the integral of squared norms, and the action will turn out to be a crucial property of the annealing curve (see, e.g., Ex. 2). Lem. 3 is on the relationship between action and isoperimetric inequalities, which sets this approach apart from the traditional analysis based on LSI or PI. We have added several linking sentences to inform the readers of the purpose of introducing these concepts and properties.
Weakness 3: The sketch of the proof of Theorem 2.
-
The authors compute the KL between two processes, neither of which is the discretisation scheme.
Response: Please refer to Figure 1 for the illustration. The two processes are: the reference SDE (Eq. 4) with path measure , which is a continuous-time process, and the ALMC SDE (Eq. 6) with path measure , which is a discretized process, although we write it in the form of an SDE. The explicit update form for practical use is detailed in Alg. 1.
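For concreteness, here is a schematic discretized annealed Langevin sampler in the spirit of the two processes described; it is our own toy version with a linear interpolation of potentials and a 1D bimodal target, not the paper's exact Alg. 1:

```python
import numpy as np

rng = np.random.default_rng(2)

def grad_V_target(x):
    # Target: 0.5 * N(-2, 1) + 0.5 * N(2, 1).
    # For this symmetric mixture, grad of -log density is x - 2*tanh(2x).
    return x - 2.0 * np.tanh(2.0 * x)

def almc(n_particles=2000, K=200, h=0.05):
    x = rng.standard_normal(n_particles)   # start from the easy base N(0, 1)
    for k in range(1, K + 1):
        lam = k / K                        # annealing schedule from base to target
        # Interpolated potential gradient: (1 - lam) * base + lam * target.
        g = (1 - lam) * x + lam * grad_V_target(x)
        # Discretized (unadjusted) Langevin step against the current target.
        x = x - h * g + np.sqrt(2 * h) * rng.standard_normal(n_particles)
    return x

samples = almc()
print(samples.mean(), np.abs(samples).mean())
```

The schedule gradually deforms the easy base distribution into the bimodal target, which is the mechanism the response describes: each step runs discretized Langevin against a slowly moving target rather than against the final distribution directly.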
-
Then the notation in the proof in the appendix seems inconsistent, denoting by the measure of another process.
Response: $\nu^{\mathrm{ALMC}}$ is defined as the distribution of the final iterate of the ALMC SDE (Eq. 6), i.e., the marginal distribution of the path measure at the final time.
-
I struggle to see how the authors use Lemma 1 in the proof of this result.
Response: Please see the response of Question 1.
Weakness 4: I would have liked to find analyses such as some (simple) numerical simulations showing that the convergence of the algorithm is for instance not exponential in the dimension.
Response: Please see Sec. 6 for the experimental results, where we studied the complexity w.r.t. the shared norm of the means. As discussed in the response to Question 4 below, in this case we show an improved bound, which we have corrected in the paper.
Weakness 5: Table 1 [...] It would be nice to have a short explanation at the differences between these algorithms.
Response: Thanks for your advice. We have modified the paper and added a description of the algorithms in Sec. 1, before the table.
Weakness 6: The intuitive explanations in lines 318-323 are rather obvious and are not really clarified by the results in the paper. This should be clear to the reader to avoid confusion.
Response: In Thm. 1, we have shown that for any given AC curve of probability measures and a sufficiently long time horizon $T$, by reparametrizing the curve and running ALD from time $0$ to $T$, the KL divergence from the target distribution to the obtained distribution at time $T$ decays to zero as $T$ grows. This shows that with a longer $T$ we are moving more slowly, and the discrepancy between the target and the obtained distribution diminishes, which supports the intuitive explanations in this paragraph.
Question 1: The SDE (6) clearly does not fit into Lemma 1 of the paper, since a term in the drift depends on a past state.
Response: Thank you for the advice. The Girsanov theorem also works in the case where the drift is adapted (not necessarily Markov). More specifically, consider a probability space on which $(B_t)_{t\ge0}$ is a standard $d$-dimensional Brownian motion with filtration $(\mathcal F_t)_{t\ge0}$. We say that a process $(b_t)_{t\ge0}$ is adapted to this filtration if $b_t$ is an $\mathcal F_t$-measurable random vector for all $t$. In our paper, on each discretization interval the drift depends on the state at the left endpoint of the interval, which is measurable w.r.t. the filtration at that time, so the drift is adapted. In this case, the Girsanov theorem still expresses the KL divergence between the two path measures as the integral of the expected squared norm of the drift difference. For more details, please check Thm. 3.2.6 of the book Log-concave Sampling. We have modified the paper accordingly.
Question 2: In Lemma 2, in what sense do the authors say the equality is "reachable"?
Response: This means that for any AC curve of probability measures, there exists a vector field that generates the curve such that the equality holds for Lebesgue-a.e. time. We have modified the statement of this lemma accordingly, and hope that this is clearer to the reader.
Question 3: In lines 373-374 the authors state that the Euler scheme is "non-optimal", without a supporting reasoning. What are their reasons for writing that?
Response: Here, by "non-optimal", we mean that there exists a better discretization scheme for the SDE (3), in the sense that the discretization error can be reduced. The reason for non-optimality is explained in the paragraph that follows it; the key point is that the integral of the linear term can be computed in closed form.
Question 4:
-
In Example 1, the authors say that $\rho_1$ is "readily sampleable". Is it then the case that $\rho_0$ is a simple distribution that can be sampled? Here the notation is not clear yet.
Response: In this example, $\rho_0$ can be any general target distribution (which may not be easy to sample from), and $\rho_1$ is the convolution of $\rho_0$ with Gaussian noise of high variance, which is easier to sample from (i.e., readily sampleable). As the noise level is large enough, samples from $\rho_1$ can serve as approximate samples from $\rho_0$. Note that this differs from the later setting, where $\rho_0$ is a distribution that is easy to sample from and $\rho_1$ is the target.
-
Are the LSI and PI constants exponentially dependent on the dimension $d$ for the distribution $\rho_t$ for any $t$?
Response: Sorry about the mistake we made here: after carefully checking the references, we found that the dimensional dependence may not be exponential; however, the dependence on the maximum distance between the modes should be exponential in general. We have modified Ex. 1 and its proof accordingly.
To illustrate this, consider a simpler case where there are only 2 Gaussian components. Schlichting, 2019 has shown an upper bound on the LSI constant of such a mixture, while Dong & Tong, 2022 have shown a lower bound that grows exponentially in the squared distance between the two means. Hence, we conclude that in this setting the LSI/PI constants would depend exponentially on the maximum distance between the modes, but the dimensional dependence might not be exponential.
We sincerely appreciate the time and effort you have dedicated to providing thoughtful feedback. We hope that our response aligns with your expectations.
I thank the authors for their clarifications and modifications. I have changed my grade to 6.
Dear reviewer, thank you for raising the score! We would like to know if there is any additional concern that you still have regarding this work; we would be eager to address it thoroughly.
We would like to extend our sincere gratitude to all the reviewers for their insightful comments and valuable feedback. We are pleased that the reviewers recognized the importance of the annealed sampling problem addressed in the paper (reviewer 4z2B), appreciated the novelty of our technical contributions (reviewers 4z2B, KRFS, WqFj, WbhJ), and commended the clarity of our presentation (reviewers 4z2B, KRFS, WbhJ). Our analysis introduces a novel framework for understanding annealing algorithms through the action of the interpolation trajectory, setting our work apart from existing bounds by avoiding dependence on isoperimetric inequalities. We believe this framework is going to significantly advance the understanding of annealed sampling algorithms and pave the way for future research directions, including the design of annealing schedules.
While all reviewers agreed on the novelty of our approach, two primary concerns were raised:
- The need to improve the writing in several parts of the paper to enhance both accuracy and readability for a general audience.
- The lack of simple numerical results to validate the theoretical findings presented in the paper.
In our revised manuscript, we have made substantial improvements to address these concerns, providing clarifications for all raised questions. Notable changes in the main paper are marked and enumerated below:
- We have added descriptions of the algorithms compared in Tab. 1.
- The statement of Girsanov's theorem (Lem. 1) has been modified to include cases where the drift terms in the SDE depend on the past state.
- We have clarified the statement "the equality is reachable" in Lem. 2. and expanded the discussion on the motivation for introducing OT concepts in Sec. 2.4.
- An error in Ex. 1 related to the dimensional dependence of LSI/PI constants has been corrected, along with adjustments to the corresponding proof in App. A.
- We added a footnote in the proof of Thm. 1 for rigor.
- In Sec. 6, we added a numerical experiment to verify the findings in Ex. 2, showing that the number of iterations required to achieve a given accuracy in KL divergence depends polynomially on .
- Finally, we slightly modified the layout of the paper to fit the additional content.
We have also addressed all concerns in the individual responses below. We hope that the changes we have made reflect our commitment to improving the quality of our work and that the revised manuscript meets your expectations.
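To convey the flavor of annealed Langevin Monte Carlo on a multimodal target, here is a minimal, self-contained sketch on a two-Gaussian mixture. The linear tempering schedule, step size, and particle count are illustrative choices of ours, not the exact configuration used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_target(x, a=3.0):
    # score (gradient of log-density) of the mixture 0.5*N(-a,1) + 0.5*N(a,1)
    w = 1.0 / (1.0 + np.exp(-2.0 * a * x))   # posterior weight of the +a mode
    return -(x - a) * w - (x + a) * (1.0 - w)

def annealed_lmc(n_particles=2000, n_steps=2000, h=0.01, a=3.0):
    """Langevin Monte Carlo whose drift is tempered by a schedule beta_k
    rising from ~0 to 1: early steps explore broadly under near-pure
    diffusion, late steps concentrate on the true target."""
    x = rng.normal(0.0, 3.0, n_particles)    # broad Gaussian initialization
    for k in range(n_steps):
        beta = (k + 1) / n_steps             # linear annealing schedule
        drift = beta * grad_log_target(x, a)
        x = x + h * drift + np.sqrt(2.0 * h) * rng.normal(size=n_particles)
    return x

samples = annealed_lmc()
```

Empirically, the returned particles place roughly half their mass in each basin, with |samples| concentrating near a, illustrating how annealing lets particles spread across both modes before the target sharpens.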
This paper provides a non-asymptotic analysis of annealed MCMC. The authors establish the oracle complexity of a simple variant of annealed Langevin Monte Carlo and provide convergence guarantees for this algorithm to the target in KL divergence. The target is assumed to have a smooth potential, and the rate is stated in terms of the action of a curve of measures that interpolates to the target, which is an intriguing parameter.
This paper was reviewed by four reviewers with the following scores/confidences: 8/4, 8/3, 6/4, 6/4. I think the paper studies an interesting topic and the results are relevant to the ICLR community. The reviewers raised the following concerns:
- It is not clear if the rate is tight (most likely it is not). As is, the result seems to be another upper bound on sampling from a class of targets.
- The assumptions are not discussed in detail or compared with standard ones in the literature. This should be improved.
- Even though this is a theoretical paper, a detailed experimentation section would make the paper much stronger.
- Minor typos were pointed out by the reviewers; these should be carefully addressed. One reviewer requested a detailed explanation of the integrator.
The authors should carefully go over the reviewers' suggestions and address any remaining concerns in their final revision. Based on the reviewers' suggestions, as well as my own assessment of the paper, I recommend including this paper in the ICLR 2025 program.
Additional comments from reviewer discussion
Reviewer questions are thoroughly answered by the authors. The revision provides colour-coded updates.
Accept (Poster)