Online learning meets Adam: The Road of Interpretable Adaptive Optimizer Design
Abstract
Reviews and Discussion
The paper builds on a series of existing works to provide some theoretical analysis of Adam. Existing works have shown that Adam can be framed as an instance of the online-to-nonconvex conversion from (Cutkosky et al 2023); therefore, the performance of Adam can be analyzed through the dynamic regret of the online learning algorithm it corresponds to. This paper seeks to extend this argument in two ways. First, the online learning algorithm underlying Adam is analyzed without the projection to a bounded domain. Second, the paper extends the online-to-nonconvex conversion itself towards last-iterate properties.
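The correspondence the summary refers to can be illustrated with a minimal sketch: a discounted online learner whose first and second moments are exponential moving averages produces Adam-style update directions. Everything below (the function name, the normalization, and the default hyperparameters) is illustrative rather than the paper's exact formulation.

```python
import math

def discounted_ftrl_direction(grads, beta1=0.9, beta2=0.999, eps=1e-8):
    """One coordinate of an Adam-like update produced by discounted moments.

    With linear losses discounted by beta1 and a quadratic regularizer
    discounted by beta2, the online learner's minimizer takes the familiar
    ratio form m / sqrt(v) (bias-correction terms omitted).
    """
    m = v = 0.0
    for g in grads:
        m = beta1 * m + (1 - beta1) * g      # discounted first moment
        v = beta2 * v + (1 - beta2) * g * g  # discounted second moment
    return -m / (math.sqrt(v) + eps)
```

Feeding a stream of gradients of one sign yields a step of the opposite sign, and for a constant gradient the direction approaches unit magnitude, as with Adam's normalized update.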
Strengths
Analyzing Adam is one of the central topics in deep learning optimization, therefore the context of this paper is relevant to the machine learning community. As far as I can see, credits are given properly to the existing works this paper builds on.
Weaknesses
Although this paper raises some good points, the overall quality is in my opinion below the acceptance threshold.
First, it seems that the existing works this paper builds on already analyzed discounted FTRL without projection, such as Theorem B.2 of (Ahn et al 2024). In this regard, the first contribution claimed by this paper is not new. It also seems that the existing works by Ahn et al used the projected version of FTRL mainly for analytical convenience, and most of their results can be derived analogously for the version without projection, if someone is willing to do the tedious extension. So it remains unclear to me what the new insight from the first part of the paper is.
Ahn, Kwangjun, et al. "Understanding Adam optimizer via online learning of updates: Adam is FTRL in disguise." arXiv preprint arXiv:2402.01567 (2024).
The second part of the paper claims to extend the framework of Cutkosky et al to last iterate convergence, but actually the final results are not based on the last iterate, as the output still needs to be sampled from the trajectory of the iterates. Again the message here is unclear.
What's more important is that the paper claims to provide better insights on Adam, but it's unclear why the two extensions from the paper are significant in this context. I could see the paper being motivated from an analytical perspective, but for people who only want to understand why Adam is effective, what is the takeaway?
Writing needs to be thoroughly improved, as I find it hard to follow some of the arguments, such as the part of Section 4 before 4.1. There are also technicalities swept under the rug, such as the hyperparameter a in Lemma 4.1 and the associated requirements. Is there any reason we should expect those requirements to hold?
Questions
Related to the above, the significance of the paper needs to be further justified, specifically for the purpose of understanding Adam.
We thank reviewer ecRp for the valuable feedback. Below, we try to address each of the reviewer’s concerns in detail.
We thank the reviewer for pointing out this clarity issue. Theorem B.2 in [1] includes a term that is the source of the clipping operation in the follow-up work [2]. This result indicates that the upper bound on the regret depends on the maximum value of the increments. However, a bound is expected to feature a problem-dependent characteristic (value) rather than a generalized formula. Thus, the follow-up work [2] proposes an explicit clipping strategy to achieve a clear upper bound.
This background motivates our own approach, in which we address this issue and remove the need for such an unrealistic clipping operation. We hope this addresses the main concern.
Considering the last-iterate convergence result for SGD on convex problems in [1] (including its high-probability version), our statement on the last-iterate guarantee offers the insight that the last iterate has a higher likelihood of being selected than the other iterates, distinguishing it from the last-iterate guarantee statements in [3,4]. We have clarified this distinction in the revised paper on line 452. We believe this approach aligns with common practice, where the final few iterations are typically prioritized. It also improves on the previous result in [3], which spreads convergence evenly across all iterations, and offers useful insights for future research in this area.
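To make the claimed distinction concrete, the output index can be drawn with weights that grow toward the final iterate instead of uniformly. A toy sketch, in which the geometric weights are purely illustrative assumptions rather than the paper's exact distribution:

```python
import random

def sample_output_index(T, beta=0.5, rng=None):
    """Sample t in {1, ..., T} with weight proportional to beta**(T - t).

    For beta < 1 the last iterate carries the single largest probability
    mass, unlike uniform sampling over the whole trajectory.
    """
    rng = rng or random.Random(0)
    weights = [beta ** (T - t) for t in range(1, T + 1)]
    r = rng.random() * sum(weights)
    acc = 0.0
    for t, w in enumerate(weights, start=1):
        acc += w
        if r <= acc:
            return t
    return T
```

With `beta = 0.5` and `T = 10`, the last iterate is returned roughly half the time, while the first iterate is returned with probability below 0.1%.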
We thank the reviewer for pointing out this clarity issue. This work emphasizes a more interpretable design principle for Adam: it refines prior results by eliminating the unrealistic clipping operation through a direct analysis of the quantities involved, and it offers the insight that the last iterate has a higher likelihood of being selected than the other iterates, which also improves on previous work.
We acknowledge that a critical question remains: "Why is Adam effective?" Prior research, as noted, suggests that "SGD achieves the minimax optimal convergence rate under smooth nonconvex conditions," yet Adam often outperforms SGD in practice. The effectiveness of Adam over other methods is another emerging topic in the analysis of Adam; however, this work focuses on interpretable Adam design and does not provide such insights. We do hold great interest in expanding on this topic in future work.
As noted in the discussion, the hyperparameter values typically used in applications do not conform to the theoretical conditions, a limitation also observed in prior work [1]. This presents an interesting avenue for future investigation.
[1] Ahn, Kwangjun, et al. "Understanding Adam optimizer via online learning of updates: Adam is FTRL in disguise." arXiv preprint arXiv:2402.01567 (2024).
[2] Ahn, Kwangjun, and Ashok Cutkosky. "Adam with model exponential moving average is effective for nonconvex optimization." arXiv preprint arXiv:2405.18199 (2024).
[3] Liu, Zijian, and Zhengyuan Zhou. "Revisiting the last-iterate convergence of stochastic gradient methods." arXiv preprint arXiv:2312.08531 (2023).
[4] Li, Xiaoyu, Mingrui Liu, and Francesco Orabona. "On the last iterate convergence of momentum methods." In International Conference on Algorithmic Learning Theory, pp. 699–717. PMLR, 2022.
Dear Reviewer, We sincerely appreciate your effort to review our paper and provide valuable feedback. As the discussion period is limited, we kindly request that you consider the new information and clarifications provided in our rebuttal. We are committed to improving our paper and addressing the concerns raised. If there are any remaining issues or aspects that require further clarification, we would be more than happy to provide additional explanations or adjustments.
Thank you for your rebuttal. I appreciate the technical refinements in this paper compared to previous works, but they don't seem to come with genuinely novel ideas, particularly for the purpose of understanding Adam. I have to retain my score as well.
The paper studies the important problem of understanding ADAM's convergence for non-convex optimization problems via online learning algorithms. Specifically, the paper considers the discounted FTRL algorithm, which has been shown to correspond to a version of ADAM. The main technical contributions are:
- Removing the gradient clipping present in prior works to obtain an algorithm closer to the practically implemented version of ADAM.
- Obtaining last iterate convergence guarantees.
I want to note that I have very limited knowledge in the field of online learning.
Strengths
The paper removes the clipping used in prior works on discounted FTRL, which makes the algorithm more realistic and closer to the real ADAM algorithm.
The paper obtains last-iterate convergence guarantees for a general class of non-convex optimization problems.
Weaknesses
[1] This work does not survey prior works' results thoroughly -- i.e., it does not state the assumptions and convergence rates obtained in prior works on ADAM.
[2] In Assumption 2.1, the bounded gradient assumption along with Assumption 2.2 of bounded domain seem very stringent. Can these be relaxed?
[3] The paper introduces the FTRL framework but does not introduce the full ADAM algorithm in detail. It would be helpful to show how it differs from the Clip-free FTRL version introduced in this paper.
Minor Comments:
On page 5, the parameter could be introduced in a better way: I had to read it off of the algorithm given below the paragraph referencing it, which made it very confusing.
Line 272 states " becomes independent of and when and are appropriately chosen". This is imprecise since Lemma 4.1 shows that is independent of these quantities.
Questions
See weaknesses
We thank reviewer JZLb for the positive feedback. Below, we try to address the reviewer's questions.
In our study, particularly in the fourth point of Assumption 2.1, the bounded variance assumption for the stochastic gradient, combined with the G-Lipschitz assumption, implies a bounded gradient. Assumption 2.2 further constrains the comparator to a bounded domain. These assumptions align with standard conditions for online-to-nonconvex conversions, as referenced in line 106.
Additionally, our work examines the classical Adam algorithm through an online learning framework, which introduces some differences in assumptions compared to prior research that typically assumes smooth nonconvex conditions, as mentioned in the related work section. Since we focus specifically on the online learning approach, we have not conducted an extensive survey of results under the smooth nonconvex assumption, but we plan to expand on this aspect upon acceptance.
In the discussion section, the discrepancy between the Adam update and the clip-free FTRL update is noted, specifically the absence of bias correction terms in the clip-free FTRL formulation.
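For concreteness, the discrepancy can be sketched as two update directions whose only difference is the bias correction; the hyperparameters and the exact normalization below are illustrative assumptions, not the paper's formulation.

```python
def adam_direction(grads, beta1=0.9, beta2=0.999, eps=1e-8):
    """Standard Adam direction, with bias-corrected moment estimates."""
    m = v = 0.0
    for t, g in enumerate(grads, start=1):
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)  # bias correction
    v_hat = v / (1 - beta2 ** t)
    return m_hat / (v_hat ** 0.5 + eps)

def clip_free_ftrl_direction(grads, beta1=0.9, beta2=0.999, eps=1e-8):
    """Same moments without the bias-correction terms."""
    m = v = 0.0
    for g in grads:
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
    return m / (v ** 0.5 + eps)
```

The two directions differ markedly in the first few steps, where the corrections `1 - beta**t` are far from 1, and coincide in the long run as those corrections approach 1.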
Thank you for identifying these clarity issues. The modifications have been implemented in the revised manuscript.
Thank you for your response. I choose to retain my score.
This paper provides an analysis of non-smooth non-convex optimization that fixes some undesirable properties of previous work, in an effort to be closer to the empirically successful adam algorithm. Specifically, some recent prior work developed a similarity between certain online learning algorithms and adam using an "online to non-convex conversion". These algorithms made use of a strange "clipping" operation - essentially clipping the adam update to some fixed diameter. Moreover, the convergence to critical points is provided only for a random iterate rather than the perhaps more desirable last iterate. This paper attempts to fix both issues.
Strengths
The approach is intuitive, and the problem is interesting. I think if the weaknesses below could be addressed I would raise my score.
Weaknesses
I have some concerns about the correctness of the results.
Lemma 4.1: the selection of seems to be impossible in general. Why should I expect to be able to do this? If any stochastic gradient happens to be zero (i.e. imagine your final loss is a hinge-loss and you happened to have a large margin on some example), then clearly must now be . However, derails the analysis as it forces us to move to a non-discounted regime.
In Theorem 4.2 there is also a significant issue I think: it looks to me that the actual result should have $\sqrt{1-\beta_2}$ rather than $1-\beta_2$ in the numerator. Notice in lines 867 to 870 of the proof, it appears that the definition was applied incorrectly to get a $1-\beta_2$ factor outside the square root rather than inside it.
In general, we should expect this change even without looking at the proof for a mistake: notice that in the "natural" setting, the "correct" value for the FTRL regularizer differs from the provided one by this factor, so we should expect it to show up in the regret bound.
Regarding the last-iterate guarantee: I do not understand how it is a last-iterate guarantee. The theoretical results seem to be about still a randomly selected iterate (although admittedly with more weight on the last iterate). This still requires randomization over all iterates though, not what I usually think of when people say a last-iterate guarantee.
Mild stylistic comment on the proofs: there is a lot of use of the "implies" arrow here, but I don't know what this means. If you mean the standard "implies" arrow, then it is not correct, since in many of these cases all the arrows should be pointing the other way: these arguments are often being used to prove the initial statement, not the final statement. As it is, I just completely ignored these arrows, and I recommend they be removed and/or possibly replaced with better explanation of the logic in appropriate cases.
Also, “it is equal to verify” is not proper grammar - try instead “it suffices to verify”.
Questions
Can the issues in the proof or the last iterate guarantee be fixed?
We sincerely thank reviewer ezs8 for the feedback. Below, we try to address each of the reviewer’s concerns in detail.
We thank the reviewer for pointing out the boundary condition issue where the squared norm of the stochastic gradient after conversion is zero, which results in an undefined value. Generally, a zero stochastic gradient aligns with the stopping criterion in standard optimization methods. However, for the online learning method discussed here, a zero gradient is also `problematic', as it yields a zero linear loss, which provides no feedback to inform the next update. Fortunately, a zero loss, which does not impact the regret bound, offers some flexibility in handling boundary values.
To maintain consistency with the proofs of Lemma 4.1 and Theorem 4.2, we propose skipping updates when a zero loss is encountered: we freeze the update at that index, which omits the zero term from subsequent summations and keeps the intermediate state at that step identical to the state at the previous step, consistent with Lemma 4.1. Meanwhile, by removing zero-loss terms, subsequent steps inherit the intermediate state and continue updating according to the iterative rule, as required by Theorem 4.2. These modifications are now reflected in the revised paper on lines 268 and 678.
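A minimal one-dimensional sketch of this skipping rule (the function name and the discounted recursion for the second moment are our illustrative assumptions, not the paper's exact update):

```python
def second_moment_skip_zero(grads, beta2=0.999):
    """Discounted second-moment recursion that freezes on zero gradients.

    A zero stochastic gradient yields a zero linear loss, so the round is
    skipped entirely: the state carries over unchanged and the discount is
    never applied to a vanishing increment.
    """
    v = None
    for g in grads:
        if g == 0.0:
            continue  # zero loss: skip this round, keep the state
        v = g * g if v is None else beta2 * v + (1 - beta2) * g * g
    return v
```

Under this rule, a trajectory containing a zero round produces exactly the same state as the trajectory with that round removed.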
On the $\sqrt{1-\beta_2}$ versus $1-\beta_2$ factor: We thank the reviewer for this correction, which has been addressed in the revised text at lines 879 and 304.
Considering the last-iterate convergence result for SGD on convex problems in [1] (including its high-probability version), our statement on the last-iterate guarantee offers the insight that the last iterate has a higher likelihood of being selected than the other iterates, distinguishing it from the last-iterate guarantee statements in [1,2].
We have clarified this distinction in the revised paper on line 452. We believe this approach aligns with common practice, where the final few iterations are typically prioritized. It also improves on the previous result in [3], which spreads convergence evenly across all iterations, and offers useful insights for future research in this area.
All suggested stylistic changes have been incorporated in the revised manuscript.
Once again, we thank reviewer ezs8 for the valuable feedback and welcome any further suggestions.
[1] Liu, Zijian, and Zhengyuan Zhou. "Revisiting the last-iterate convergence of stochastic gradient methods." arXiv preprint arXiv:2312.08531 (2023).
[2] Li, Xiaoyu, Mingrui Liu, and Francesco Orabona. "On the last iterate convergence of momentum methods." In International Conference on Algorithmic Learning Theory, pp. 699–717. PMLR, 2022.
[3] Ahn, Kwangjun, and Ashok Cutkosky. "Adam with model exponential moving average is effective for nonconvex optimization." arXiv preprint arXiv:2405.18199 (2024).
Regarding the skipping of zero updates: I don't think this really resolves the issue. What happens if there are some gradients that are very small but not zero?
You could plausibly resolve this by adding some noise to the gradients or carefully tuning a "when to ignore the gradient because its norm is too small" parameter, but this would destroy any adaptivity. Perhaps this is ok - even a "non-adaptive" clip-free FTRL might be interesting, since the adaptivity of these algorithms is not very well characterized anyway.
However, I think there is a more important confusion: I don't follow the final bound in Theorem 5.3, especially in light of the changed $\sqrt{1-\beta_2}$ factor. First, this bound doesn't seem to show that the gradient "norm" goes to zero as $T \to \infty$, so it's not clear if the algorithm finds critical points. My impression is that with the required hyperparameter choices, the bound would not decay with $T$ (it would have decayed with the $1-\beta_2$ rather than the $\sqrt{1-\beta_2}$ in the numerator of the regret bound).
We thank reviewer ezs8 for the additional comments and valuable insights!
Theorem 5.3 represents a relatively independent result, incorporating the previous discounted-FTRL method. However, we acknowledge the validity of your concern regarding the fixed error when applying the proposed method under the current conditions.
A more refined selection of the hyperparameters has the potential to address this issue; for instance, choosing values closer to 1 may help mitigate the observed fixed error. Additionally, as discussed in the manuscript, bounding the increments introduces larger errors and tighter restrictions on the hyperparameter selection range. However, this trade-off also opens up possibilities for achieving more relaxed conditions under additional assumptions.
Dear Reviewer, We sincerely appreciate your effort to review our paper and provide valuable feedback. As the discussion period is limited, we kindly request that you consider the new information and clarifications provided in our rebuttal. If there are any remaining issues or aspects that require further clarification, we would be more than happy to provide additional explanations or adjustments.
This paper studies how to interpret the Adam algorithm through discounted online learning algorithms. The authors propose an online learning algorithm that does not need the projection operation, which aligns more closely with Adam's practical implementation. They also provide last-iterate convergence guarantees for the discounted FTRL algorithm, but with a non-diminishing error.
Strengths
This paper studies the online learning interpretation of the Adam algorithm, which is an interesting topic. The previous interpretation (Ahn & Cutkosky, 2024) requires a clipping (projection) operation, whereas this paper proposes to avoid this operation via a careful configuration of the hyperparameters, which looks like a novel contribution.
The authors also provide a last-iterate convergence guarantee for the discounted FTRL algorithm, which is an interesting result.
Weaknesses
I have the following major doubts/questions:
In terms of $a$ and $\beta_2$: In the proof, it seems that $a$ and $\beta_2$ are considered fixed constants. However, in the key Lemma 4.1, both are time-variant parameters. This leads to the following questions about the parameter $a$: 1) it seems that $a$ depends on the algorithm itself (line 292). Therefore, it is unclear to me how to find a universal $a$ to make sure such an inequality holds. If we use a different $a$ at different rounds, does the proof still hold? Will $a$ be monotone? What is the scale of $a$ and $\beta$?
Line 698: : why is it true? we know , so should equal to . and are totally different.
Since the setting of these parameters is critical for achieving the clip-free property, I believe understanding the points mentioned above is very important to make sure the proof is rigorous.
Question on the theoretical guarantees: In Theorem 4.2, what is the dependence on $v_t$? Does $v_t$ have an upper bound?
Other comments/questions:
Presentation: As mentioned above, the parameter settings are very unclear to me, and I hope the authors can provide more explanations.
Questions
Please see the Weakness section.
We sincerely thank reviewer GbeF for the feedback. Below, we try to address each of the reviewer’s concerns in detail.
On $\beta_2$ as a constant: We thank the reviewer for pointing out this issue. In the revised manuscript, we specify that $\beta_2$ belongs to a specific range rather than being time-variant. This change, reflected in line 696 of the revised manuscript, adjusts our findings, particularly in the setting of $a$, whose magnitude is now fixed rather than time-variant. We hope this addresses the main concern raised.
We thank the reviewer for this correction, which has been addressed in the revised text at line 696.
On the parameter $\eta$: The parameter $\eta$ is tunable and typically selected manually. Consistent with prior work, the optimal value of $\eta$ depends on specific problem characteristics. For our study, we set $\eta$ to achieve the regret bound presented in Theorem 4.2, as outlined in line 876.
On the upper bound of $v_t$: In the conversion of the online learning method into an optimization algorithm, the environment feedback represents the stochastic gradient. In our work, particularly in the fourth point of Assumption 2.1, the bounded variance assumption for the stochastic gradient combined with the G-Lipschitz assumption does imply an upper bound on $v_t$. It is worth mentioning that Assumption 2.1 contains the standard conditions for non-convex and non-smooth optimization, as referenced in line 106.
Once again, we thank reviewer GbeF for the valuable comments and welcome any additional suggestions.
Dear Reviewer, We sincerely appreciate your effort to review our paper and provide valuable feedback. As the discussion period is limited, we kindly request that you consider the new information and clarifications provided in our rebuttal. We are committed to improving our paper and addressing the concerns raised. If there are any remaining issues or aspects that require further clarification, we would be more than happy to provide additional explanations or adjustments.
This paper explores the theoretical foundations of the Adam optimizer, particularly focusing on its relationship with online learning algorithms. The authors propose a clip-free FTRL method that removes the gradient clipping present in previous works, offering a theoretical framework that aligns more closely with the practical implementation of Adam.
Reviewers raised significant concerns regarding the validity of the proofs and the novelty of the contributions. Several reviewers questioned the treatment of parameters, as well as the assumptions made, particularly those related to bounded gradients. Additionally, the last-iterate convergence claims were disputed, with reviewers pointing out that the results still rely on sampling from the entire trajectory rather than focusing solely on the last iterate.
Additional Comments from the Reviewer Discussion
The authors have partially addressed the reviewers' concerns. However, due to significant issues in the theoretical analysis, I believe the paper requires a full review after substantial revisions.
Reject