Universal Online Convex Optimization with $1$ Projection per Round
Reviews and Discussion
This paper introduces methods for constrained OCO which automatically achieve the optimal rate without knowing in advance whether the losses are convex, strongly convex, exp-concave, or smooth, while using only 1 projection per round. This is notable because the standard approach proceeds by combining several expert algorithms with a meta-algorithm; in constrained settings, each of these expert algorithms requires a potentially expensive projection. This work avoids projecting each of the expert algorithms' iterates by leveraging the constrained-to-unconstrained reduction of Cutkosky (2020).
Strengths
The paper addresses a clear and real problem that has been left unaddressed by the majority of literature on this topic. The approach is a pretty straight-forward modification of existing reductions, but uses them in a new and unexpected way.
Weaknesses
The main weakness is that the paper feels poorly factored. There is a very large number of back references to previous equations, and the paper would be very hard to read in print. To actually follow the math, it's almost necessary to read the paper with a pdf viewer which can display pop-up previews when hovering over links. I think this would be remedied by better factoring the results into lemmas and propositions.
As noted, the approach is a fairly straight-forward modification of the results from Cutkosky & Orabona (2018) and Cutkosky (2020), and essentially boils down to not dropping negative terms in the analysis, and then exposing matching terms in the regret decomposition. I think this is fine overall; these considerations are missing from the literature, and this is a fitting place for them to enter the literature.
Questions
Do you think there could possibly be a more abstract way to formalize these universal algorithms? The strange thing about these universal algorithms is that a whole new algorithm seemingly needs to be devised every time one wants to incorporate a new kind of loss (e.g., MetaGrad -> Maler to handle strongly convex losses). The ideal result would more generally be a reduction which just passes the losses to each of the experts, maybe with some additional side-information, and lets them construct whatever surrogate loss they want with it. In this way there might just be one "final" paper on universal guarantees.
Limitations
Yes, the paper points out in the conclusion that the bounded domain / gradient assumption is a significant limitation that they hope to address in future work.
Many thanks for the constructive reviews! We will revise our paper accordingly.
Q1: The main weakness is that the paper feels poorly factored.
A1: We apologize for the confusion caused by the numerous references to previous equations in this paper. Due to the page limit, we attempted to include all the mathematical details of our key ideas in the main paper, which made it difficult to follow in print. We promise to polish the writing and will follow your suggestion to factor the results into lemmas and propositions.
Q2: Do you think there could possibly be a more abstract way to formalize these universal algorithms?
A2: Yes, there does exist a general framework in the studies of universal online learning that allows expert-algorithms to process the original functions [Zhang et al., 2022]. We are pleased to briefly describe the basic idea of their method here. Following the two-layer structure, the regret can be decomposed into the meta-regret and the expert-regret, that is,
$$\sum_{t=1}^{T} f_t(\mathbf{x}_t) - \sum_{t=1}^{T} f_t(\mathbf{x}^\ast) = \underbrace{\sum_{t=1}^{T} f_t(\mathbf{x}_t) - \sum_{t=1}^{T} f_t(\mathbf{x}_t^i)}_{\text{meta-regret}} + \underbrace{\sum_{t=1}^{T} f_t(\mathbf{x}_t^i) - \sum_{t=1}^{T} f_t(\mathbf{x}^\ast)}_{\text{expert-regret}},$$
where $\mathbf{x}_t$ and $\mathbf{x}_t^i$ denote the output of the meta-algorithm and the $i$-th expert in the $t$-th round. To bound the expert-regret, we can directly use existing online algorithms as the expert-algorithms that achieve optimal regret bounds for different types of convex functions, e.g., ONS for exp-concave functions and OGD for strongly convex functions. To control the meta-regret, their meta-algorithm employs the linearized loss, i.e., $\ell_t(\mathbf{x}) = \langle \nabla f_t(\mathbf{x}_t), \mathbf{x} - \mathbf{x}_t \rangle$, to measure the performance of each expert. Furthermore, they also require the meta-algorithm to yield a second-order bound in terms of $\sum_{t=1}^{T} \langle \nabla f_t(\mathbf{x}_t), \mathbf{x}_t - \mathbf{x}_t^i \rangle^2$. In this way, the meta-regret is small for exp-concave functions and strongly convex functions, i.e., at most logarithmic in $T$, and is also tolerable for convex functions, i.e., of order $O(\sqrt{T})$ up to logarithmic factors. By incorporating existing online algorithms as experts, their approach inherits the regret of any expert designed for strongly convex functions and exp-concave functions, and also obtains minimax optimal regret for convex functions. We feel that this framework is close to the "final" paper on universal guarantees that you mentioned.
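To make the two-layer structure above concrete, here is a minimal Python sketch of such a framework. All names (`Expert`, `run_universal`, `meta_weights`) are hypothetical placeholders, not the actual interface of Zhang et al. [2022] or of our paper; the sketch only illustrates how the meta-algorithm measures each expert with the linearized loss while the experts receive the original gradients.

```python
import numpy as np

class Expert:
    """A stand-in for any off-the-shelf expert (e.g., OGD or ONS) that is run
    on the original loss information and returns its own decision each round."""
    def __init__(self, dim, lr):
        self.x = np.zeros(dim)
        self.lr = lr

    def predict(self):
        return self.x

    def update(self, grad):
        # Plain gradient step; a real expert would use its own rule and project.
        self.x = self.x - self.lr * grad


def run_universal(experts, meta_weights, loss_grad, T):
    """meta_weights(history, n) maps the history of linearized expert losses to a
    probability vector over the n experts (e.g., an Adapt-ML-Prod-style
    meta-algorithm that enjoys a second-order regret bound)."""
    decisions, history = [], []
    for t in range(T):
        preds = np.array([e.predict() for e in experts])    # expert decisions x_t^i
        w = meta_weights(history, len(experts))              # meta weights p_t
        x_t = w @ preds                                      # combined decision x_t
        g_t = loss_grad(x_t)                                 # gradient of the true loss at x_t
        history.append(preds @ g_t - g_t @ x_t)              # linearized losses <g_t, x_t^i - x_t>
        for e in experts:                                    # experts are fed the original gradients
            e.update(loss_grad(e.predict()))
        decisions.append(x_t)
    return decisions
```

For a quick check of the plumbing, `meta_weights` can simply return uniform weights, e.g., `lambda hist, n: np.ones(n) / n`.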
We acknowledge that our work indeed follows this universal framework. However, to address new problems, we have to make essential modifications to this framework. In this paper, we focus on reducing the projection complexity of universal online learning, and have made innovative technical contributions based on this framework. These include specially designed expert-losses for expert-algorithms that handle strongly convex functions, and novel theoretical analysis that advances universal algorithms and the black-box reduction.
Great! Thanks for the detailed explanation of Zhang 2022, this does indeed seem to be the kind of result I was hoping for
I believe the authors will be able to factor the results and polish the writing between now and the camera-ready. I think these are important results and the approach is quite nice, so I've raised my score to a 7.
Dear Reviewer VeB9,
Many thanks for your kind reply! We will further enhance our paper according to your constructive reviews.
Best regards,
Authors
This paper addresses the challenge of online convex optimization with unknown curvature of the loss functions, which can be convex, strongly convex, or exp-concave. The authors propose an algorithm that achieves regret bounds of order $O(\sqrt{T})$, $O(\log T)$, and $O(d\log T)$ respectively, while requiring only a single projection step per round onto the original domain $\mathcal{X}$. Such projections can indeed be computationally expensive. Additionally, the authors present improved regret bounds in the small-loss regime.
Most algorithms that achieve similar adaptive regret upper bounds rely on meta-algorithms that combine experts (running ONS or OGD with surrogate losses), inspired by the MetaGrad algorithm. Typically, these algorithms necessitate $O(\log T)$ projection steps per round (one per expert), which can be computationally burdensome. Mhammedi et al. (2019) reduced this projection cost to $1$, but at the expense of an $O(d\log T)$ regret for strongly convex losses. To overcome this, the authors introduce new surrogate losses based on a black-box reduction technique by Cutkosky & Orabona (2018), which transfers the constrained optimization problem on $\mathcal{X}$ to another domain, such as a Euclidean ball, where projections are easier.
Strengths
- The paper is well-written and offers valuable insights into the use of surrogate losses to adapt to strong convexity or exp-concavity. It may serve as a comprehensive entry point into the extensive literature on universal OCO algorithms.
- Despite combining various existing techniques, the results are non-trivial and required solving technical challenges, especially for the strongly convex case. The authors introduce novel negative terms in the analysis to achieve their results.
- Experiments included in the appendix demonstrate that the computational improvements can be significant.
Weaknesses
- The theoretical improvements may appear incremental, appealing to a niche audience. The improvement is only in the specific case of the strongly convex regret under $O(1)$ projection steps per round. The primary high-level ideas in the algorithm and analysis are based on prior work.
- The paper still relies on a meta-aggregation procedure, which, although theoretically effective, is not particularly elegant and maintains a per-round complexity of order $O(\log T)$. Achieving $O(1)$ complexity per round, however, seems highly challenging.
- The convex rate stated in the results does not appear to be exactly correct.
Questions
- The algorithm requires prior knowledge of the parameters $T$ and $G$; would a simple doubling trick allow tuning these?
- Your algorithm still requires O(1) projection steps on $\mathcal{X}$ and O(log T) projection steps on $\mathcal{Y}$. Do you think that projection-free algorithms such as variants of Online Frank-Wolfe could be used instead of OGD and ONS (at the cost of slightly deteriorating the rate) to remove all projections (or at least the O(log T) ones on $\mathcal{Y}$) while still being adaptive to the smoothness?
Limitations
The authors have addressed the limitations.
Many thanks for the constructive reviews! We will revise our paper accordingly.
Q1: Would a simple doubling trick allow tuning the parameters $T$ and $G$?
A1: In fact, the doubling trick enables our proposed algorithm to avoid the prior knowledge of $T$, at the cost of an additional multiplicative factor in the regret bound. However, this trick is inadequate for removing the requirement of the prior knowledge of the gradient bound $G$, as the variability of the function gradients is unknown in advance. Therefore, designing Lipschitz-adaptive online learning algorithms is a highly challenging task and merits further exploration in the future.
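For completeness, the following is a minimal sketch of the standard doubling trick for an unknown horizon $T$; `make_algo` and the `step` interface are hypothetical placeholders, not our actual implementation. It restarts the learner with a doubled horizon guess whenever the guess is exhausted, which is exactly why it handles an unknown $T$ but offers no help with an unknown gradient bound $G$ (the gradients are only revealed online).

```python
def doubling_trick(make_algo, loss_stream):
    """make_algo(horizon) builds a fresh learner tuned for a fixed horizon;
    loss_stream yields one loss (or gradient oracle) per round."""
    epoch, t_in_epoch = 0, 0
    budget = 1
    algo = make_algo(horizon=budget)
    for loss in loss_stream:
        if t_in_epoch == budget:           # horizon guess exhausted: double it and restart
            epoch += 1
            budget = 2 ** epoch
            algo = make_algo(horizon=budget)
            t_in_epoch = 0
        algo.step(loss)                    # hypothetical per-round update
        t_in_epoch += 1
```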
Q2: Your algorithm still requires O(1) projection steps on $\mathcal{X}$ and O(log T) projection steps on $\mathcal{Y}$.
A2: We would like to clarify that, in contrast to the expensive projection operations onto the complicated domain $\mathcal{X}$, the projection operations onto the surrogate domain $\mathcal{Y}$ are much cheaper and can be considered negligible. In this work, we construct a Euclidean ball as the surrogate domain $\mathcal{Y}$. For the OGD algorithm, the projection of the reduced algorithm onto $\mathcal{Y}$ can be realized by a simple rescaling, i.e., $\mathbf{y}_t = \hat{\mathbf{y}}_t$ if $\|\hat{\mathbf{y}}_t\|_2 \le R$, and $\mathbf{y}_t = R\,\hat{\mathbf{y}}_t / \|\hat{\mathbf{y}}_t\|_2$ otherwise, where $\hat{\mathbf{y}}_t$ denotes the unprojected decision, $\mathbf{y}_t$ denotes the decision on the ball $\mathcal{Y}$, and $R$ is the radius of $\mathcal{Y}$. For the ONS algorithm, there exists an efficient implementation in the literature, where the time-consuming projection operation can be replaced by a more efficient singular value decomposition [Lemma 7, Mhammedi et al., 2019]. After combining all the experts' decisions by the meta-algorithm, we project the combined solution in $\mathcal{Y}$ onto $\mathcal{X}$, which is the only projection onto $\mathcal{X}$ per round.
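As a small illustration of this point, here is a sketch contrasting the two kinds of projections (the names are ours, not the paper's notation): projecting onto the ball $\mathcal{Y}$ is a closed-form rescaling, whereas projecting onto a general domain $\mathcal{X}$ requires an application-specific solver, which is why we invoke the latter only once per round.

```python
import numpy as np

def project_onto_ball(y_hat, radius):
    """Projection onto Y = {y : ||y||_2 <= radius}: keep y_hat if it is already
    inside the ball, otherwise rescale it onto the boundary."""
    norm = np.linalg.norm(y_hat)
    return y_hat if norm <= radius else (radius / norm) * y_hat

def project_onto_domain(y, domain_projection_solver):
    """Projection onto a general convex set X, i.e., argmin_{x in X} ||x - y||_2.
    The solver is supplied by the application and may be expensive (e.g., a QP);
    in our algorithm it is called once per round, on the combined decision."""
    return domain_projection_solver(y)
```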
Q3: Do you think that projection-free algorithms such as variants of Online Frank-Wolfe could be used instead of OGD and ONS to remove all projections while still being adaptive to the smoothness?
A3: Thanks for your insightful comments! We could indeed choose projection-free algorithms, such as variants of Online Frank-Wolfe, as the expert-algorithms. However, given the current studies on projection-free algorithms (summarized in the table below), this approach may suffer a deterioration of the regret bound and cannot handle certain cases.
More specifically, as shown in the table, there are no suitable projection-free algorithms in the literature for exp-concave functions, nor for strongly convex and smooth functions. Moreover, when the functions are smooth, existing projection-free algorithms are unable to achieve problem-dependent bounds, such as the small-loss bounds in this work.
| Algorithm | Condition on Loss | Regret Bound |
|---|---|---|
| OFW [1] | convex | |
| OSPF [2] | convex and smooth | |
| SC-OFW [3] | strongly convex | |
| AFP-ONS [4] | exp-concave and smooth | |
Finally, we would like to highlight that although using projection-free algorithms can remove all projections, they may not achieve greater efficiency under the universal framework. Specifically, most projection-free algorithms, such as OFW and its variants, replace the original projection operation with a linear optimization step. Since the universal framework requires maintaining $O(\log T)$ expert-algorithms, this approach needs to perform $O(\log T)$ linear optimization steps per round, which can be time-consuming when $T$ is large.
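To illustrate why replacing projections with linear optimization does not reduce the per-round cost of the universal framework, here is a rough sketch of a single Online Frank-Wolfe style update; the smoothed objective and step size only loosely follow the Hazan-Kale construction, and `lin_opt_oracle` is an assumed oracle returning $\arg\min_{v \in \mathcal{X}} \langle d, v\rangle$. Each expert maintained by the framework would issue one such oracle call per round, so with $O(\log T)$ experts the framework still performs $O(\log T)$ linear optimization steps per round.

```python
import numpy as np

def ofw_step(x, x_init, grad_sum, eta, t, lin_opt_oracle):
    """One Frank-Wolfe style online update: the projection is replaced by a single
    call to a linear optimization oracle over the feasible set X."""
    # Gradient of a smoothed objective of the form eta*<grad_sum, x> + ||x - x_init||^2.
    direction = eta * grad_sum + 2.0 * (x - x_init)
    v = lin_opt_oracle(direction)              # linear optimization step over X
    sigma = min(1.0, 2.0 / (t + 2))            # a standard Frank-Wolfe step-size schedule
    return (1.0 - sigma) * x + sigma * v
```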
References:
[1] Hazan and Kale. Projection-free Online Learning. ICML, 2012.
[2] Hazan and Minasyan. Faster Projection-free Online Learning. COLT, 2020.
[3] Wan et al. Projection-free Online Learning over Strongly Convex Sets. AAAI, 2021.
[4] Garber and Kretzu. Projection-free Online Exp-concave Optimization. COLT, 2023.
Thank you for your response. This addresses some of the points I raised. I have no further questions and will await the discussion with the other reviewers.
Dear Reviewer aw3E,
Thank you very much for your kind reply! We will further improve our paper.
Best regards,
Authors
This paper studies universal OCO algorithms with fewer projections. Previous work either uses $O(\log T)$ projections per round, or has a sub-optimal dependence on the dimension $d$ for strongly convex losses. This work designs a new surrogate loss to achieve tight regret for Lipschitz convex/exp-concave/strongly-convex losses simultaneously, with only 1 projection per round.
Strengths
The technical contributions are solid: this paper makes a strict improvement over previous results.
The paper is very well-written, clearly introducing the challenges and the main ideas. Details of the analysis and algorithm are nicely explained.
Weaknesses
The contribution seems somewhat incremental to me. The only improvement is a factor of $d$ for strongly convex losses. Such a result is nice to know, but I'm not sure how significant it is. In addition, the technical novelty isn't significant either.
Questions
See weaknesses.
Limitations
See weaknesses.
Many thanks for the constructive reviews. We will revise our paper accordingly.
Q1: The significance of our result.
A1: We emphasize that improving the dependence on the dimensionality $d$ is significant in online learning studies. We provide the following elaborations and will also revise the paper to highlight this significance more clearly.
- For online convex optimization, knowing the minimax optimality regarding $T$ and $d$ is a fundamental question to investigate. It is well known that the minimax rates for convex/exp-concave/strongly convex functions are $O(\sqrt{T})$, $O(d\log T)$, and $O(\log T)$, respectively. Therefore, in universal online learning, achieving regret bounds that match these minimax rates is also of great significance. As explicitly mentioned by the authors of MetaGrad, van Erven et al. [1], the extra factor of $d$ in the regret bound for strongly convex functions is a "real limitation". Our results successfully achieve minimax optimality in terms of $T$ and $d$ through non-trivial technical ingredients.
- Numerous studies in online learning are dedicated to improving regret bounds by reducing the dependence on problem parameters such as $d$ and $T$. For example, prior work on private OCO achieved a regret bound carrying extra polylogarithmic factors in $T$ [2]; a subsequent study improved this dependence [3], which is regarded as a significant advancement. Other examples include improving such dependence for multi-armed bandits, bandit convex optimization, etc.
Therefore, we believe that the result in this paper is significant.
References:
[1] van Erven et al. MetaGrad: Adaptation using Multiple Learning Rates in Online Learning. JMLR, 2021.
[2] Jain et al. Differentially Private Online Learning. COLT, 2012.
[3] Kairouz et al. Practical and Private (Deep) Learning Without Sampling or Shuffling. ICML, 2021.
Q2: The significance of our technical novelty.
A2: We would like to take this chance to emphasize our technical novelty.
- Challenge: This work focuses on universal algorithms with only $1$ projection per round. Prior work applied the black-box reduction to an existing universal algorithm, i.e., MetaGrad, which achieves optimal regret for exp-concave and convex functions with $1$ projection per round. The basic idea is to cast the original problem on the constrained domain $\mathcal{X}$ into an alternative problem of a surrogate loss on a simpler domain $\mathcal{Y}$ (see the illustrative sketch after this list). To handle strongly convex functions, a straightforward way is to use a universal algorithm that supports strongly convex functions, e.g., Maler, as the black-box subroutine. However, this black-box approach fails to deliver a tight regret bound for strongly convex functions, because we are unable to bound a term that involves both the fake decision $\mathbf{y}_t$ on the surrogate domain and the true decision $\mathbf{x}_t$ on the feasible domain; this term is unmanageable since $\mathbf{x}_t$ and $\mathbf{y}_t$ do not coincide in general. To the best of our knowledge, this technical challenge is entirely new for both universal algorithms and the black-box reduction, with no prior work to draw upon.
- Analysis: In this work, we address the above challenge in two steps. First, we introduce a novel meta-expert decomposition and, motivated by it, we specially design a novel expert-loss. This expert-loss offers the advantage of bounding the meta-regret by a certain difference term (given explicitly in the paper).
Second, to bound the above difference, we dig into the properties of the surrogate loss of the black-box reduction. The technical novelty of our theoretical analysis is that the second-order bound in terms of the decision on the surrogate domain (the fake decision) can be converted into one in terms of the decision on the feasible domain (the true decision), at the cost of adding an additional positive term (see Lemma 7 for details), which is formally defined in our paper. Furthermore, compared to previous work on the black-box reduction, we also prove a tighter connection between the original function and the surrogate loss (see Lemma 3 for details), which introduces a negative term that can automatically offset the cost arising from the aforementioned conversion. By combining these two bounds, we are able to control the meta-regret under the strong convexity condition, thus ensuring that it does not affect the algorithm's optimality. Therefore, we believe that the technical novelty in our theoretical analysis is valuable for both universal algorithms and the black-box reduction.
- Applications: Two-layer algorithms are widely adopted in modern online learning, including the universal online learning studied in our paper and also non-stationary online learning. Therefore, our developed techniques can also be useful for non-stationary online learning, especially regarding strongly convex/exp-concave functions. For example, Baby and Wang [4] propose a two-layer algorithm with optimal dynamic regret for strongly convex functions, which also suffers from a high projection complexity. By applying our technique to a strongly adaptive algorithm with a second-order bound, it is hopeful to reduce the number of projections of their method to $1$ per round.
Overall, we believe that the technical novelty of our work is significant. Thanks for your comments, and we will revise the paper to further highlight the technical novelty.
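To make the "fake decision vs. true decision" distinction above concrete, the following is a minimal sketch of a Cutkosky-Orabona-style constrained-to-unconstrained reduction, where the learner lives on a larger domain $\mathcal{Y}$ and the played point is its projection onto $\mathcal{X}$. We stress that this is only the generic reduction; the surrogate loss in our paper is specially designed for strongly convex functions and differs from the one shown here.

```python
import numpy as np

def reduction_round(y_t, grad_at, project_onto_X):
    """y_t: decision of the learner on the larger domain Y (the fake decision).
    grad_at(x): gradient oracle of the true loss.
    project_onto_X: the single (possibly expensive) projection onto X per round."""
    x_t = project_onto_X(y_t)                  # true decision actually played
    g_t = grad_at(x_t)                         # gradient observed at the true decision
    gap = y_t - x_t
    dist = np.linalg.norm(gap)
    # Subgradient at y_t of the generic surrogate <g_t, y> + ||g_t|| * dist(y, X),
    # which is what the learner on Y is updated with.
    if dist > 0:
        surrogate_grad = g_t + np.linalg.norm(g_t) * gap / dist
    else:
        surrogate_grad = g_t
    return x_t, surrogate_grad
```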
References:
[4] Baby and Wang. Optimal dynamic regret in proper online learning with strongly convex losses and beyond. AISTATS, 2022.
Thank you for your detailed response!
Given that the optimal dependence on $T$ and $d$ is already known for each sub-case, the improvement here concerns only how these cases are aggregated to build universal algorithms. In my opinion, such an improvement is not as significant because the study of universal algorithms mostly remains at the theory level so far: it is questionable how useful universal algorithms are in practice, and there seem to be far fewer scenarios that crucially require universal algorithms than scenarios covered by each sub-case.
As a result, I will maintain my score.
The paper considers the problem of handling various function classes, such as convex, exp-concave, and strongly convex, via one algorithm. This sub-area, called universal algorithms, has been studied significantly in the literature. The typical solutions run log T experts at different learning rates. The default such solution requires log T projections, one for each expert. This was resolved by Mhammedi et al., who gave an algorithm with 1 projection. That algorithm gets the right rate for convex and exp-concave losses but misses the right rate for strongly convex losses, suffering an additional d factor. This is the rate that this paper improves. They do so by moving the problem to the domain of a sphere via a construction by Cutkosky and Orabona.
The paper is pretty well written and, as the reviewers point out, it lays out the results and the techniques well. Overall, the paper makes a somewhat niche theoretical improvement in my opinion (shared by other reviewers), which is not very surprising. However, since the reviewers have given the paper high scores, I am recommending acceptance.