Online Inverse Linear Optimization: Efficient Logarithmic-Regret Algorithm, Robustness to Suboptimality, and Lower Bound
We present an efficient $O(n \ln T)$-regret method for online inverse linear optimization, extend it to suboptimal feedback, and provide an $\Omega(n)$-regret lower bound.
Abstract
Reviews and Discussion
This paper addresses the problem of online inverse linear optimization. In this framework, at each round, the learner receives a feasible optimization set along with the optimal result of an oracle call (referred to as the agent). The learner aims to recover the underlying linear parameter from these observations by minimizing cumulative regret.
Previous work has shown that a black-box reduction to online convex optimization can be applied, yielding regret upper bounds of order $O(\sqrt{T})$ or $O(n^4 \ln T)$. However, the logarithmic regret rate was only achieved by methods whose per-round time complexity grows polynomially with the round index $t$, which is thus impractical for large $T$.
In this work, the authors present a simple reduction that allows the use of Online Newton Step (ONS) with a loss function inspired by that of MetaGrad. This approach achieves a regret of order $O(n \ln T)$ while retaining the per-round computational complexity of ONS---i.e., constant in $T$ per round, with the main computational bottleneck being the generalized projection step.
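As a rough sketch of how such an ONS update looks (hypothetical notation and parameter choices, not the paper's exact algorithm; the projection is the generalized projection mentioned above, here onto the Euclidean unit ball):

```python
import numpy as np

def ons_step(w, A, g, gamma=0.1):
    """One Online Newton Step update with a generalized projection onto
    the Euclidean unit ball (illustrative parameter choices)."""
    A = A + np.outer(g, g)                          # accumulate curvature
    y = w - (1.0 / gamma) * np.linalg.solve(A, g)   # Newton-style step
    if np.linalg.norm(y) <= 1.0:                    # already feasible
        return y, A
    # Generalized projection: argmin_{||v|| <= 1} ||y - v||_A^2.
    # KKT gives v = (A + lam*I)^{-1} A y; bisect on lam so that ||v|| = 1.
    vals, U = np.linalg.eigh(A)
    z = U.T @ y
    lo, hi = 0.0, 1e12
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        v = vals * z / (vals + lam)
        lo, hi = (lam, hi) if v @ v > 1.0 else (lo, lam)
    return U @ (vals * z / (vals + 0.5 * (lo + hi))), A
```

The one-dimensional bisection solves the KKT condition of the generalized projection; any root-finding method would do, and an SVD/eigendecomposition makes the per-round cost independent of $T$.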
Moreover, by applying MetaGrad instead of ONS---which automatically tunes the parameters of ONS---they show that their method remains robust even when the agent is suboptimal. Specifically, their regret scales as $O(n \ln T + \sqrt{\Delta_T n \ln T})$ rather than linearly in $\Delta_T$, where $\Delta_T$ denotes the cumulative suboptimality of the agent.
Finally, the authors provide an $\Omega(n)$ lower bound (on a specific instance), demonstrating the optimality of their method up to a logarithmic factor in $T$.
Strengths and Weaknesses
Strengths:
The paper is well-written, and the analysis is, as the authors claim, relatively simple. It mainly involves defining an appropriate loss function and applying existing results from ONS and MetaGrad. This simplicity can be viewed as a strength (a straightforward method with easy implementation that significantly improves upon existing results) or as a weakness (the lack of novel technical contributions).
Weaknesses:
The paper lacks numerical experiments. It would have been valuable to demonstrate, in practice, the benefits in terms of regret or computational cost using synthetic experiments. In particular, what is the effective cost of the generalized projection step in ONS when solving the optimization problem to sufficient accuracy for the regret guarantees? (It would be helpful to recall this in the paper.) If both algorithms are given similar time budgets (e.g., by controlling the precision with which the optimization problems are solved), would it be possible to compare with the regret bound of [25]?
There remains a gap between the upper and lower bounds: $O(n \ln T)$ versus $\Omega(n)$. Moreover, constant regret might be achievable in general, although it is likely to be challenging. It would also be worthwhile to include in the comparison table the constant regret bound (albeit exponential in $n$) from [25]. Is it possible that an exponential dependence on $n$ is unavoidable for achieving constant regret?
Questions
- For the online-to-batch conversion, is it possible to obtain the result with high probability?
- Since MetaGrad simultaneously achieves both the logarithmic-type and the $O(\sqrt{T \ln \ln T})$ regret bounds, it would be valuable to state this explicitly in your results. The $O(\sqrt{T \ln \ln T})$ bound may be preferable in regimes where the cumulative suboptimality $\Delta_T$ is large.
Limitations
Yes
Final Justification
I went through the reviews and rebuttal and support acceptance. In my view, the central question is whether the limited novelty of the analysis (being a straightforward application of ONS and MetaGrad) should be seen as a strength or a weakness. I believe it is a strength, as it improves on previous approaches, which were significantly more complex. I think the authors address the other concerns well in their rebuttal.
Formatting Concerns
NC
We sincerely thank the reviewer for the positive feedback and insightful questions. Below, we address each point raised.
W1. Numerical experiments
Thank you for the constructive suggestion regarding numerical experiments. Encouraged by your and other reviewers' comments, we conducted preliminary experiments to assess the practical performance of our approach.
In short, we found that cutting-plane-style methods (CP; e.g., Besbes et al. 2021, 2025; Gollapudi et al. 2021) quickly become computationally impractical, even in moderate dimensions. In contrast, our ONS-based method remains tractable and consistently outperforms OGD (Bärmann et al., 2020) in terms of the regret. The results provide empirical evidence that ONS can serve as a strong practical alternative to both CP and OGD. Please see our response to Reviewer xL4W for the details of the experimental setup and results.
Regarding the effective cost of the generalized projection in ONS, as discussed by Mhammedi et al. (2019), the projection can be implemented via singular value decomposition (SVD), which achieves machine-epsilon precision in $O(n^3)$ time. On the other hand, cutting-plane-based methods such as that of Gollapudi et al. [25] require an impractically large number of samples to reach comparable approximation accuracy. Thus, under any realistic time budget, ONS would perform significantly better than CP in terms of the regret, an observation supported by our experiments for the larger dimensions.
W2. On the dependence on $n$ in the constant regret bound
Whether it is possible to avoid the exponential dependence on $n$ in the constant regret bound remains an important open question. This is highlighted in Section 6 of our paper and in Gollapudi et al. [25, Section 4.1]. While prior approaches that achieve logarithmic and constant regret bounds (Gollapudi et al. 2021; Besbes et al. 2021, 2025) rely on cutting-plane methods, our work introduces a new approach based on Online Newton Step (ONS). The insights from these distinct methodologies would be complementary, and we hope that combining them will lead to progress on this open problem. In this respect, we believe that our ONS-based approach provides a meaningful contribution.
Q1. High-probability online-to-batch conversion
Yes, since it is the application of the standard online-to-batch conversion to the convex suboptimality loss, it can be extended to a high-probability guarantee via a commonly used concentration argument (e.g., Orabona, "A Modern Introduction to Online Learning," Section 3.1.2). A caveat is that, as usual, obtaining a guarantee that holds with probability at least $1 - \delta$ comes with an additive $O(\sqrt{\ln(1/\delta)/T})$ term. This additive factor makes the overall convergence rate $O(\sqrt{\ln(1/\delta)/T})$, which is a fundamental limitation of the online-to-batch conversion approach to obtaining high-probability guarantees.
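For completeness, the conversion can be sketched as follows (a standard argument; notation and constants are illustrative, with $f_t$ the suboptimality loss, $F$ its expectation, $\bar{w}_T$ the average iterate, and $w^\star$ a minimizer of $F$):

```latex
% With probability at least 1 - delta: Jensen's inequality, the regret
% bound, and Azuma--Hoeffding applied to the martingale differences
% F(w_t) - f_t(w_t) together give
\[
  F(\bar{w}_T) - F(w^\star)
  \le \frac{1}{T}\sum_{t=1}^{T}\bigl(f_t(w_t) - f_t(w^\star)\bigr)
      + O\!\left(\sqrt{\frac{\ln(1/\delta)}{T}}\right)
  = O\!\left(\frac{n \ln T}{T}\right)
      + O\!\left(\sqrt{\frac{\ln(1/\delta)}{T}}\right),
\]
% so the additive deviation term dominates the O((n ln T)/T) rate.
```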
Q2. Making both regret bounds explicit
We appreciate the suggestion. We agree that explicitly stating the $\sqrt{T}$-type regret bound alongside the logarithmic one would be helpful. We will incorporate it into the revised version.
We hope that the above responses effectively address the points raised. Please do not hesitate to let us know if any questions remain or if further clarification would be helpful.
I thank the authors for their responses. After reading their rebuttals and other reviews, I keep my score for now and wait for the discussion with other reviewers.
This paper studies the problem of online inverse linear optimization. By applying the Online Newton Step (ONS) algorithm, the authors achieve logarithmic regret in time ($O(n \ln T)$) with per-round complexity independent of $T$. The setting is further extended to account for suboptimal actions, and corresponding lower bounds are established.
Strengths and Weaknesses
Strengths:
- Per-round complexities of different algorithms are compared, and a per-round complexity independent of $T$ has been achieved.
- A lower bound on the regret, capturing the dependence on the dimension $n$, has been proved.
Weaknesses:
- It seems that there is a tradeoff between per-round complexity and regret bound in Table 1. In high dimensions, ONS can be costly, as reflected in the polynomial-in-$n$ term of its per-round complexity.
- The reported overall complexity can be somewhat misleading. For example, the cost of the linear optimization oracle can itself depend on $n$, in which case the overall complexity may be well more expensive than the stated per-round bound alone.
- While the paper motivates the results from a practical perspective, there is no numerical experiment demonstrating the effectiveness of the proposed methods over the state of the art.
Questions
Is the suboptimal feedback setting considered in the paper related in any way to the framework of online optimization with hints? If so, it would be helpful to clarify the connection and highlight any similarities or differences.
Limitations
Adequately discussed.
Final Justification
The authors adequately addressed my questions and I increased the initial score.
Formatting Concerns
None
We sincerely thank the reviewer for their time and effort in reviewing our work. Below, we address each point raised.
W1 & W2. Regarding complexity
We appreciate your comments on the description of complexity. We will revise it accordingly to avoid any confusion.
As discussed in Section 3, the generalized projection step has a typical computational cost of $O(n^3)$, and we agree that this should have been explicitly stated in Table 1. We will revise the table caption to include this remark.
While ONS's per-round complexity is indeed higher than that of OGD, it is important to note that the existing logarithmic-regret methods of Besbes et al. (2021, 2025) and Gollapudi et al. (2021) incur even greater per-round costs that depend on both $n$ and $t$ (e.g., polynomial in $t$), as noted in the caption of Table 1. In contrast, our approach eliminates any dependence on $t$ and typically keeps the per-round cost dominated by the cost of solving linear optimization over the feasible set, which is also required by OGD. We thus believe that this complexity should be regarded as a strength of our approach rather than a weakness. In fact, Reviewer 1Hto noted that "This is a significant advancement, as previous logarithmic-regret methods were highly inefficient for large time steps."
W3. Numerical experiments
We appreciate the reviewer's feedback regarding numerical experiments. In light of your and other reviewers’ comments, we have conducted experiments to complement our theoretical results.
Setup
We use synthetic datasets inspired by the hard instance used in our lower bound analysis (Section 5). The agent's true objective is a random unit vector in $\mathbb{R}^n$. The learner's prediction set is the $n$-dimensional Euclidean unit ball. For each round $t$, we sample an endpoint $u_t$ uniformly at random from the unit sphere in $\mathbb{R}^n$ and set the feasible set to the line segment determined by $u_t$. We generate instances for three values of $n$.
Methods
We compare the following three methods:
- ONS: our online-Newton-step-based method,
- OGD: the online gradient descent method based on Bärmann et al. (2020),
- CP: a cutting-plane-style method inspired by Gollapudi et al. (2021).
For CP, we must compute, at each round, the centroid of the region consistent with past observations. Since exact centroid computation is #P-hard, and even polynomial-time approximations like hit-and-run sampling are often impractical, we adopt a randomized heuristic: we pre-sample candidate points from the unit ball, eliminate those violating accumulated cuts, and approximate the centroid as the average of the remaining points. Though not theoretically grounded, this method serves as a tractable approximation of the original CP approach.
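A minimal sketch of this heuristic (hypothetical names; cuts are stored as half-spaces $\{w : \langle a, w \rangle \ge b\}$):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(num, n):
    """Sample points uniformly from the n-dimensional unit ball."""
    x = rng.standard_normal((num, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # project onto the sphere
    r = rng.random(num) ** (1.0 / n)                # radial CDF inversion
    return x * r[:, None]

def approx_centroid(candidates, cuts):
    """Average the candidates that satisfy every accumulated cut.

    Each cut is a pair (a, b) encoding the half-space {w : <a, w> >= b}.
    Falls back to the origin if no candidate survives (heuristic only).
    """
    mask = np.ones(len(candidates), dtype=bool)
    for a, b in cuts:
        mask &= candidates @ a >= b
    if not mask.any():
        return np.zeros(candidates.shape[1])
    return candidates[mask].mean(axis=0)
```

As more cuts accumulate, fewer candidates survive, which is exactly the scalability limitation discussed in the results below.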
Results
Below are tables showing the cumulative regret and the cumulative runtime (in seconds) of ONS, OGD, and CP over $T$ rounds, for each of the three dimensions.
When the dimension is small, CP achieves the lowest cumulative regret. However, its cumulative regret deteriorates significantly for the two larger dimensions. This degradation stems from the limited number of pre-sampled candidate points: in principle, exponentially many (in $n$) samples would be required for an accurate approximation, which is infeasible even for moderate $n$. Moreover, although the above randomized centroid approximation substantially reduces computation by averaging over the surviving candidates, it remains less scalable than OGD or ONS. These results highlight practical limitations of CP in moderate- to high-dimensional settings.
In contrast, ONS consistently achieves low regret across all dimensions while remaining computationally feasible. It outperforms OGD in terms of the regret and scales reasonably well as $n$ increases. Note that our ONS implementation uses a straightforward projection routine that repeatedly solves similar linear systems, an overhead that could be reduced by more sophisticated implementation techniques. Further speedups could also be achieved via quasi-Newton-type updates or sketching-based techniques, as discussed in Sections 3 and 4.
Taken together, these findings affirm that ONS provides a strong and scalable alternative to existing methods in online inverse linear optimization, especially when CP is computationally infeasible and OGD's regret performance is unsatisfactory.
Q. Relation to online learning with hints
Thank you for the interesting question. We considered your suggestion in light of the literature on online learning with hints, such as Dekel et al. (NeurIPS 2017, "Online Learning with a Hint") and Bhaskara et al. (ICML 2020, "Online Learning with Imperfect Hints"). Below, we clarify the similarities and differences.
The "online learning with hints" framework pertains to standard online linear optimization (OLO), where the learner selects an action $x_t$ and then observes a cost vector $c_t$. Before making a decision, the learner receives a hint vector $h_t$, which is expected to correlate with $c_t$. The goal is to minimize the regret with respect to the best fixed action in hindsight.
In contrast, our setting arises from inverse optimization. The learner predicts an objective vector, which induces an action, and then observes the agent's actual choice. The regret is measured with respect to the agent's true objective vector: the cumulative gap between the objective values of the agent's actions and those of the actions induced by the learner's predictions. Thus, the learner selects a prediction, and the feedback is observed after decision-making, in the form of the agent's action.
Although the two frameworks might appear similar, they differ in fundamental ways. In the hint setting, the learner receives explicit side information before making a decision. In our setting, the feedback arrives after the learner's prediction, and it only indirectly reflects the unknown objective vector.
That said, one could envision a hybrid setup where the learner receives both the agent's feedback and external side information about the true objective vector. Developing algorithms that leverage such additional information (perhaps to remove the $\ln T$ factor in the regret bound) would be an exciting direction for future research. We thank the reviewer for raising this intriguing connection and will include it in the revised version.
We hope the above clarifications help resolve your concerns. Please do not hesitate to let us know if any questions remain or if further clarification would be helpful.
I would like to thank the authors for their responses. I do not have any further questions for the authors, and I will wait for the discussion with other reviewers.
This paper studies the problem of online inverse linear optimization, proposing a simple and relatively efficient algorithm with an improved regret bound of $O(n \ln T)$. The main idea is the use of the Online Newton Step with a surrogate function proposed by the authors of the algorithm MetaGrad. Furthermore, the authors show that when the learner observes a suboptimal action instead of an optimal one, as in the original problem, using MetaGrad with experts leads to a similar regret bound with an extra penalty of $O(\sqrt{\Delta_T n \ln T})$, where $\Delta_T$ is the cumulative gap between the optimal payoffs and the payoffs of the observed actions. Finally, the authors also show an $\Omega(n)$ lower bound on the regret of any (possibly randomized) algorithm for the problem, and show how one can get $O(1)$ regret for $n = 2$.
Strengths and Weaknesses
The algorithm proposed for the problem is surprisingly simple and yields strong results. It is quite interesting to see such a simple algorithm, given that previous algorithms for the same (or a similar) problem use more intricate machinery based on cutting-plane methods. Ultimately the regret bounds follow from results on MetaGrad (as the authors note, even Thm 3.1 is MetaGrad with only one expert), but this should not be seen as a negative point for this paper, since this application was not obvious at all in hindsight and the authors do shave off some constants in their analysis. The lower bound is also simple, so I feel the results overall are quite elegant.
One problem (although relatively minor) the paper has is with notation. It is quite heavy at some moments and can get quite confusing at some points. I know the MetaGrad paper and recognize that a lot of this notation comes from it, but this is not a good excuse to reuse the same notation in the main paper.
Another minor problem with presentation is that it was not clear to me, when first reading the paper, whether the constants the big-Oh notation was hiding in Sec 1 could include a dependency on problem-scale quantities (such as the $B$ or $D$ constants later on). Appendix A (which is quite well-written) clarifies this a lot, but it would be nice to say, for example, that the rates in the table in Sec 1 are for all sets being constrained to the unit ball, for the sake of uniformity.
Questions
Here are a few parts where notation is heavy and maybe a bit confusing:
- In Prop 2.5, do the vectors need to belong to the domain, or is it ok for them to be arbitrary? Also, having some of them be arbitrary vectors while others are iterates was quite confusing. It makes sense for MetaGrad, but at this point it was quite confusing;
- For Theorem 4.1 we need the subgradients used by MetaGrad to be the specific gradients given by Prop 2.4, right? I don't think this is specified anywhere.

Other minor comments:
- lines 49-50: "(...) about defining the regret." not clear what you meant here
- line 138: "Alternately" here is to mean that first the player places the prediction and only afterwards receives feedback, if I understood it correctly, but I don't think this is the correct way to say that.
- Assumption 2.2: "diameter" is slightly ambiguous since we don't know if you mean the squared norm or not. If you have the space, I would add the mathematical formula to clear up any possible confusion;
Limitations
The authors do a good job of placing their contributions in the literature and discussing limitations.
Final Justification
I went through the other reviews, and I still believe this is a strong submission. I don't agree with the most negative review, and the other slightly negative reviewer had some good points that I felt were properly addressed in the rebuttal. I'll engage in the discussion with reviewers and AC to try to reach consensus.
Formatting Concerns
No formatting concerns
We sincerely thank the reviewer for the careful and constructive feedback, as well as for your strong support of our work. We are grateful for the detailed reading and thoughtful suggestions rooted in deep expertise. Below, we respond to the questions:
- Regarding the domain in Proposition 2.5, both vectors are assumed to belong to the domain stated in eq. (3) of the main text.
- In Theorem 4.1, the subgradients are indeed those given in Proposition 2.4. We will clarify this in the revised version.
We also appreciate the minor comments and will incorporate the suggested improvements in the revised version.
Regarding the remark on lines 49–50: we intended to convey that online inverse linear optimization can, in principle, benefit from richer feedback than standard online linear optimization (OLO). In OLO, the cost vectors provide little information about the best action in hindsight, which is the comparator in the regret, restricting the attainable regret rate to $\Theta(\sqrt{T})$. In contrast, in online inverse linear optimization, the observed feedback is optimal for the agent's true objective vector, which serves as the comparator defining the regret in this setting. Intuitively, this richer information enables us to achieve logarithmic regret bounds, surpassing the $\Omega(\sqrt{T})$ barrier of general OLO.
I would like to thank the authors for their rebuttal to my review and to the other reviewers as well, it helps a lot to see the discussion in other reviews.
About the (minor) points I raised, I think the longer discussion the authors provided about the comments on lines 49-50 does help a lot. In the original write-up I had a hard time interpreting it just by the way it is framed. If space allows, I would encourage incorporating this lengthier version to the paper, since it is a nice discussion when introducing the problem.
I went through the other reviews, and I still believe this is a strong submission. I don't agree with the most negative review, and the other slightly negative reviewer had some good points that I felt were properly addressed in the rebuttal. I'll engage in the discussion with reviewers and AC to try to reach consensus.
This paper aims to solve the problem of online inverse linear optimization, where a learner tries to deduce a hidden linear objective function of an agent by observing the agent's chosen optimal actions over time. The quality of the learner's predictions is measured by regret, which is the cumulative gap between the optimal objective values and those achieved by following the learner's predictions. In the literature, Besbes et al. (2021, 2025) propose an algorithm that achieves a logarithmic regret bound, but it is not computationally efficient, since the number of constraints grows as fast as a polynomial in the time horizon $T$. The authors propose an efficient algorithm that achieves logarithmic regret and has a per-round complexity independent of $T$.
Strengths and Weaknesses
Strengths:
- The paper presents the first logarithmic-regret method whose per-round computational complexity is independent of the time horizon $T$. This is a significant advancement, as previous logarithmic-regret methods were highly inefficient for large time steps. Also, the proposed method achieves the best-known logarithmic regret bound of $O(n \ln T)$.
- The method is described as "strikingly simple" as it applies the Online Newton Step (ONS) to appropriate exp-concave loss functions. This simplicity is considered a strength, making it more accessible and easier to implement.
- Corollary 4.2 provides an online-to-batch conversion for offline settings. The comparison with [Bärmann et al., 2020] on offline guarantees notes that the rate of their prior work is $O(1/\sqrt{T})$ even when the feedback is exactly optimal, whereas this paper offers a faster $O((n \ln T)/T)$ rate, highlighting an improvement.
Weaknesses:
- The primary limitation explicitly stated by the authors is that their work is restricted to the case where the agent's underlying optimization problem is linear. Extending this to nonlinear settings is identified as an important direction for future work.
- While the lower bound of $\Omega(n)$ and the upper bound of $O(n \ln T)$ are established, there remains an $O(\ln T)$ gap between them, which the authors identify as an important open problem.
Questions
- Lines 290–296: you state MetaGrad's per-round complexity and that "If the $\ln T$ factor is a bottleneck, we can use more efficient universal algorithms [Mhammedi et al., 2019, Yang et al., 2024] to reduce the number of projections from $O(\ln T)$ to $1$." Could you briefly explain how these more efficient universal algorithms (e.g., [Mhammedi et al., 2019, Yang et al., 2024]) reduce the number of projections to 1 in the context of MetaGrad? And what is the factor or concern that prevents you from choosing those more advanced algorithms in the first place?
- Lines 338–339: you mention, "Whether a similar lower bound holds when all feasible sets are full-dimensional remains an open question." Can you briefly speculate on what specific challenges arise in constructing a similar lower bound proof for the full-dimensional case compared to the line segments used in Theorem 5.1?
- Lines 715–727: in Section C.2, you discuss the challenge of extending the constant regret bound for $n = 2$ to $n \ge 3$, due to the area becoming arbitrarily small while the angle does not. Could you elaborate more on the most promising direction you believe could close the gap for $n \ge 3$?
Limitations
yes
Final Justification
The author has resolved my questions. I have also reviewed the discussions of the author with other reviewers, and I believe this paper is acceptable so I will keep my score and support for this paper.
Formatting Concerns
No obvious formatting concerns were found in the paper.
We sincerely thank the reviewer for the insightful comments and questions. Below, we address each point raised.
Q1. Reducing the number of projections in MetaGrad
The reason we used the standard version of MetaGrad in the complexity discussion is that it is both intuitive and widely known. The more efficient implementation in Mhammedi et al. (2019, Section 4) can indeed reduce the computational cost from $O(\ln T)$ projections to a single projection per round without any sacrifice, which we explain below.
In MetaGrad, each of the $\eta$-experts performs a generalized projection of the following form, as in Algorithm 1 in Appendix B:
$$w_{t+1} \leftarrow \arg\min_{w \in \mathcal{W}} \left\| w_t - \frac{1}{\gamma} A_t^{-1} \nabla q_t(w_t) - w \right\|_{A_t}^2,$$
where $A_t$ is the positive definite matrix accumulated from outer products of past gradients and $\gamma$ is the expert's learning-rate parameter. Let $\mathcal{W}$ be a Euclidean ball for simplicity. Then, the projection can be computed via singular value decomposition (SVD) of $A_t$. Naively doing this for each expert would require $O(\ln T)$ SVD computations per round. However, as observed in Mhammedi et al. (2019, Section 4.1), $A_t$ is the sum of outer products of the same gradient vectors across all $\eta$-experts, and the only difference among the experts lies in the scalar weighting applied to these outer products. This enables us to compute the SVD once and reuse it across all experts, thereby reducing the projection cost to that of a single projection. Even when $\mathcal{W}$ is not a Euclidean ball, the approach can be extended by reducing the projection to the Euclidean case based on a technique developed by Cutkosky and Orabona (2018, "Black-box reductions for parameter-free online learning in Banach spaces"), as discussed in Mhammedi et al. (2019, Section 4.2).
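The shared-eigendecomposition trick can be sketched as follows (a minimal illustration under the assumption that each expert's matrix has the form $\mathrm{reg} \cdot I + \eta^2 M$ for a shared $M$; names are hypothetical):

```python
import numpy as np

def project_all_experts(ys, M, etas, reg=1.0):
    """Project each expert's point onto the unit ball in its own A-norm,
    with A_eta = reg * I + eta**2 * M, reusing one eigendecomposition of M.

    M is the shared sum of gradient outer products; only the scalar
    eta**2 differs across experts (illustrative form of the observation
    in Mhammedi et al., 2019)."""
    evals, U = np.linalg.eigh(M)          # computed once, reused by all experts
    out = []
    for y, eta in zip(ys, etas):
        a = reg + eta**2 * evals          # eigenvalues of this expert's A
        z = U.T @ y
        if z @ z <= 1.0:                  # already inside the unit ball
            out.append(y)
            continue
        # Bisection on the KKT multiplier lam: v = (A + lam*I)^{-1} A y
        # has ||v|| decreasing in lam; find lam with ||v|| = 1.
        lo, hi = 0.0, 1e12
        for _ in range(200):
            lam = 0.5 * (lo + hi)
            v = a * z / (a + lam)
            lo, hi = (lam, hi) if v @ v > 1.0 else (lo, lam)
        out.append(U @ (a * z / (a + 0.5 * (lo + hi))))
    return out
```

Only the cheap per-expert scalar work remains inside the loop; the $O(n^3)$ factorization is shared.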
Q2. On extending the lower bound to full-dimensional
Thank you for highlighting this subtle and important point. Our current lower bound construction relies on line segment domains that are axis-aligned. Intuitively, this structure ensures that the observed feedback reveals only a single coordinate of the agent's true objective vector. For example, if the segment is aligned with the $i$-th coordinate axis, then the feedback provides information only about the $i$-th coordinate and nothing about the others. This readily implies that $\Omega(n)$ observations are required to infer the full vector, which forms the basis of our lower bound.
In contrast, if the feasible set is full-dimensional, say, the box $[-1, 1]^n$, the maximizer lies at one of the box's corners. Observing the sign pattern of the agent's action in such cases allows the learner to infer the sign of every coordinate of the true objective vector, thereby revealing more information per round than in the line segment case.
Establishing lower bounds under full-dimensional feasible sets thus involves additional challenges in limiting the per-round information gain. For the purposes of this work, using line segment domains is sufficient to derive the $\Omega(n)$ lower bound, while keeping the proof technically simple and transparent.
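As a tiny illustration of this intuition (a hypothetical instance, not the paper's exact construction), axis-aligned segments reveal one coordinate per round:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
c = rng.uniform(0.1, 1.0, size=n)  # hidden objective vector (hypothetical)

def feedback(i):
    """Feasible set = segment between -e_i and e_i; the agent's optimal
    action reveals only the sign of the i-th coordinate of c."""
    e = np.zeros(n)
    e[i] = 1.0
    # The agent maximizes <c, x> over the segment, so it picks whichever
    # endpoint has the larger inner product with c.
    return e if c @ e >= c @ (-e) else -e

# After t < n rounds, at most t coordinates of sign(c) are identified,
# so Omega(n) rounds are needed before the learner can pin down all signs.
observed = {i: feedback(i) for i in range(3)}
```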
Q3. On the discussion in Appendix C
We are glad that the reviewer found the discussion in Appendix C of interest. This part of our analysis is inspired by cutting-plane-type ideas (Besbes et al., 2021, 2025; Gollapudi et al., 2021), where the learner iteratively eliminates inconsistent regions of the objective vector space. However, as noted, this approach faces technical barriers, such as the gap between the angular distance and the area of the consistent region, that make it challenging to extend the analysis to general $n$.
In contrast, our main approach builds on online Newton step (ONS) with tailored exp-concave surrogate losses. While this incurs a $\ln T$ factor, the online convex optimization (OCO) literature provides data-dependent refinements for exp-concave losses. For example, Orabona et al. (AISTATS 2012, "Beyond Logarithmic Bounds in Online Learning") introduce the so-called small-loss-dependent analysis, which tightens the regret bound based on the comparator's cumulative loss. Adapting such advanced OCO techniques to online inverse optimization could be a promising future direction. More broadly, we believe that the cutting-plane and ONS-based viewpoints highlight complementary aspects of the problem, and integrating insights from both may ultimately help close the gap.
We hope the above responses effectively address your questions. Please do not hesitate to let us know if any questions remain or if further clarification would be helpful.
I particularly want to thank to author for their detailed feedback, and they have resolved my questions. I also go through other reviewers' feedback and rebuttals. I would like to keep my support for this paper and maintain my score.
This paper addresses the problem of online inverse linear optimization, where the learner is given a sequence of feasible sets and corresponding optimal decisions, and seeks to recover the unknown linear cost vector. The authors propose a learning algorithm based on the Online Newton Step (ONS) method applied to a specially constructed loss function. Assuming this loss is exp-concave, the algorithm achieves a regret bound of $O(n \ln T)$, with per-round computational complexity independent of the time horizon $T$. The algorithm is also extended to handle suboptimal actions using a MetaGrad-style ensemble. A lower bound of $\Omega(n)$ is provided, establishing the optimality of the regret's dimensional dependence up to logarithmic factors.
Strengths and Weaknesses
Strengths
- The paper studies a well-motivated and practically relevant problem in online learning and inverse optimization.
- The paper includes a matching lower bound, which strengthens the theoretical framing.
Weaknesses
- The central theoretical guarantee relies entirely on the assumption that the proposed loss function is exp-concave. However, the paper does not prove this property or provide sufficient conditions under which it holds. Since exp-concavity is a strong assumption that does not hold generically, the regret bound is not fully justified.
- Once exp-concavity is assumed, the regret analysis is a standard application of the ONS algorithm. The work does not introduce new techniques or insights beyond this reuse.
- The paper includes no experiments to demonstrate the method’s practical efficiency or its advantage over existing baselines such as constraint-accumulation methods. This significantly limits its potential impact.
- The regret bound hides constants involving problem-scale quantities (such as the diameter and norm bounds), which are not analyzed or estimated. This limits the practical interpretability of the results.
- Although the paper focuses on linear objectives, the framing and claims suggest broader applicability. In reality, the approach is tailored to a narrow setting, and the key assumptions do not generalize easily.
Questions
- Can you provide a formal proof or structural conditions under which the proposed loss function is exp-concave? Without this, the regret bound in Theorem 3.1 is not well grounded.
- Have you considered implementing your algorithm and comparing its runtime and regret behavior to prior methods (e.g., Besbes et al., which accumulate constraints over time)? Even synthetic experiments could strengthen the paper.
- The regret bounds include constants tied to problem parameters such as the diameter and norm bounds. Can you give concrete estimates or examples that help interpret these constants in typical inverse optimization settings?
- This work is clearly focused on linear objectives. Section 6 mentions the potential for kernelization. Can your self-bounding analysis extend to non-linear objectives or convex settings? Is exp-concavity still plausible in those cases?
- The gap between upper and lower bounds remains open. Is this tight, or can the upper bound be improved with further algorithmic refinements?
Limitations
While the paper discusses the restriction to linear objectives and the unresolved gap, it does not sufficiently highlight the reliance on the unproven exp-concavity assumption, which is critical to the validity of the main result. This should be addressed explicitly.
Justification for final rating
I thank the authors for the detailed responses that addressed my concerns/problems. The exp-concavity of the surrogate loss function is clearly proved, and the simplified analysis technique can be seen as a strength. Upon reading through the other reviewers' comments, I decided to increase my score while decreasing my confidence.
Formatting issues
no
We sincerely thank the reviewer for their time and effort. Feedback that articulates specific points, like yours, is especially helpful in refining the manuscript. Below, we address the questions and concerns.
Q1. Proof of exp-concavity
The loss function used in Theorem 3.1 is exp-concave under Assumption 2.2. The proof is given in Appendix B.2 (as part of the general proof of Proposition 2.5), and we reproduce it below for clarity.
Specifically, for the $t$-th feedback $a_t \in \arg\max_{a \in X_t} \langle x^\star, a \rangle$, prediction $\hat{x}_t \in \Theta$, and parameter $\eta \in (0, 1/(20S)]$, the loss function is defined as
$$\ell_t(x) = -\eta \langle g_t, \hat{x}_t - x \rangle + \eta^2 \langle g_t, \hat{x}_t - x \rangle^2,$$
where $g_t = \hat{a}_t - a_t$ for $\hat{a}_t \in \arg\max_{a \in X_t} \langle \hat{x}_t, a \rangle$. Its gradient and Hessian are
$$\nabla \ell_t(x) = \eta \bigl(1 - 2\eta \langle g_t, \hat{x}_t - x \rangle\bigr) g_t, \qquad \nabla^2 \ell_t(x) = 2\eta^2 g_t g_t^\top.$$
Therefore, we have
$$\nabla \ell_t(x) \nabla \ell_t(x)^\top = \eta^2 \bigl(1 - 2\eta \langle g_t, \hat{x}_t - x \rangle\bigr)^2 g_t g_t^\top \preceq 2\eta^2 g_t g_t^\top = \nabla^2 \ell_t(x),$$
where the inequality uses Assumption 2.2, which ensures $|\langle g_t, \hat{x}_t - x \rangle| \le 4S$, so that $\bigl(1 - 2\eta \langle g_t, \hat{x}_t - x \rangle\bigr)^2 \le 2$ holds for $\eta \le 1/(20S)$. Thus, $\ell_t$ satisfies $\nabla^2 \ell_t(x) \succeq \nabla \ell_t(x) \nabla \ell_t(x)^\top$ for all $x \in \Theta$.
This condition is equivalent to $1$-exp-concavity for the twice-differentiable function $\ell_t$ (e.g., Hazan [26, Lemma 4.2]). Specifically, the Hessian of the function $h(x) = \exp(-\ell_t(x))$ satisfies
$$\nabla^2 h(x) = \exp(-\ell_t(x)) \bigl(\nabla \ell_t(x) \nabla \ell_t(x)^\top - \nabla^2 \ell_t(x)\bigr) \preceq 0,$$
which implies that $h$ is concave. Therefore, $\ell_t$ is $1$-exp-concave for $\eta \in (0, 1/(20S)]$.
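As a quick numerical sanity check (an illustrative sketch of ours, with hypothetical parameters rather than the paper's exact constants), the condition $\nabla^2 \ell \succeq \nabla \ell \, \nabla \ell^\top$ can be verified for a MetaGrad-style surrogate $\ell(x) = -\eta z + \eta^2 z^2$ with $z = \langle g, \hat{x} - x \rangle$, sampling predictions and actions from the unit ball so that $S = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_ball(n):
    """Random point in the unit Euclidean ball (random direction and radius)."""
    v = rng.standard_normal(n)
    return rng.uniform() * v / np.linalg.norm(v)

def grad_hess(g, xhat, x, eta):
    """Gradient and Hessian of l(x) = -eta*z + eta^2*z^2, z = <g, xhat - x>."""
    z = g @ (xhat - x)
    grad = eta * (1.0 - 2.0 * eta * z) * g
    hess = 2.0 * eta**2 * np.outer(g, g)
    return grad, hess

n, S = 5, 1.0            # unit-ball predictions/actions give |<x, a>| <= S = 1
eta = 1.0 / (20.0 * S)   # regime in which the exp-concavity condition holds
ok = True
for _ in range(1000):
    xhat, x = rand_ball(n), rand_ball(n)
    a_opt, a_hat = rand_ball(n), rand_ball(n)
    g = a_hat - a_opt    # surrogate direction; |<g, xhat - x>| <= 4S
    grad, hess = grad_hess(g, xhat, x, eta)
    # 1-exp-concavity condition: Hess - grad grad^T must be PSD.
    ok &= np.linalg.eigvalsh(hess - np.outer(grad, grad)).min() >= -1e-12
print(ok)
```

Every sampled instance satisfies the positive-semidefiniteness condition, matching the proof.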
Novelty given the exp-concavity. While our methods build on ONS and MetaGrad, the way we leverage these tools in the context of online inverse linear optimization is novel and nontrivial. A key technical contribution is the identification of the exp-concave surrogate loss that enables the simple yet effective use of ONS in this setting. Notably, commonly used losses in inverse optimization—such as the suboptimality loss—are convex but not exp-concave, and thus do not support logarithmic regret bounds via ONS. Regarding this point, Reviewer 1Hto noted that “... it applies the Online Newton Step (ONS) to appropriate exp-concave loss functions. This simplicity is considered a strength, ...” More importantly, this viewpoint leads to our second main result: robustness to suboptimal feedback via MetaGrad. Reviewer kNBn remarked this application was “not obvious at all in hindsight,” highlighting the conceptual and technical novelty of our approach.
We hope this clarification resolves the reviewer’s concerns regarding the exp-concavity and the technical novelty of our contributions.
Q2. Experiments
Thank you for raising this point. Encouraged by your question, we conducted numerical experiments to evaluate our approach empirically. In summary, we found that our ONS-based method consistently outperforms OGD (Bärmann et al., 2020) in terms of the regret, while remaining significantly more efficient than cutting-plane-style methods, akin to those proposed by Besbes et al. (2021, 2025) and Gollapudi et al. (2021). These findings suggest that our ONS-based method is a highly effective approach, particularly when the problem dimension is too high for cutting-plane methods to be feasible and when OGD’s regret performance is unsatisfactory. Due to space limitations, we present the details of the experimental setup and results in our response to Reviewer xL4W.
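As a minimal illustration (a simplified toy sketch of ours, not the experimental setup reported to Reviewer xL4W; all parameter values are hypothetical), the following simulates online inverse linear optimization with an ONS-style update on the surrogate gradient, using a crude Euclidean projection in place of the generalized projection:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, m = 3, 300, 20

x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)       # hidden objective direction

eta, gamma, eps = 0.1, 0.5, 1.0        # illustrative parameter choices
A = eps * np.eye(n)                    # ONS preconditioner
x_hat = np.zeros(n)
regrets, in_ball = [], True

for t in range(T):
    # Random finite action set on the unit sphere.
    X_t = rng.standard_normal((m, n))
    X_t /= np.linalg.norm(X_t, axis=1, keepdims=True)
    a_t = X_t[np.argmax(X_t @ x_star)]   # agent's optimal action
    a_hat = X_t[np.argmax(X_t @ x_hat)]  # action suggested by the prediction
    regrets.append(x_star @ (a_t - a_hat))
    # Surrogate gradient at x_hat (the quadratic term vanishes there).
    g = eta * (a_hat - a_t)
    A += np.outer(g, g)
    x_hat = x_hat - (1.0 / gamma) * np.linalg.solve(A, g)
    nrm = np.linalg.norm(x_hat)
    if nrm > 1.0:                        # crude Euclidean projection onto the ball
        x_hat /= nrm                     # (proper ONS uses a generalized projection)
    in_ball &= np.linalg.norm(x_hat) <= 1.0 + 1e-9

nonneg = all(r >= -1e-12 for r in regrets)  # regret is nonnegative since a_t is optimal
print(in_ball and nonneg)
```

The per-round cost is dominated by the $n \times n$ linear solve, in contrast to cutting-plane-style methods whose per-round cost grows much faster with the dimension.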
Q3. Regarding constants depending on $K$, $R$, and $S$
Our regret upper bound of $O(n \ln T)$ depends only mildly on the parameters $K$, $R$, and $S$. The constant factor is interpretable and can be estimated based on the agent's forward problem specifications. Let us first clarify their definitions:
- $K$ is the $\ell_2$-diameter of $\Theta$, the learner's set of predictions.
- $R$ is the maximum $\ell_2$-diameter of $X_t$, the agent's feasible action set.
- $S$ is an upper bound on the inner product $|\langle x, a \rangle|$ for any $x \in \Theta$ and $a \in X_t$.
In many cases, the learner is interested only in predicting the direction of the agent's objective vector $x^\star$, since the observed feedback reveals no information about the scale of $x^\star$. Therefore, we can usually assume that $\Theta$ is the unit Euclidean ball, yielding $K = 2$. If $x^\star$ is known to be non-negative, we may alternatively assume that $\Theta$ is, for example, the probability simplex, as in Bärmann et al. (2017).
The value of $R$ depends on the agent's forward problem. For instance, in the customer preference estimation problem studied by Bärmann et al. (2017, Section 4.1), the action set is defined by a knapsack constraint: $a \in \{0, 1\}^n$ and $\langle p, a \rangle \le b$, for some price vector $p$ and budget $b$. If $\underline{p} > 0$ is a lower bound on the price vector entries, then any feasible action has at most $b/\underline{p}$ nonzero entries, so the $\ell_2$-diameter is at most $2\sqrt{b/\underline{p}}$. In the literature (e.g., Gollapudi et al., 2021; Besbes et al., 2021, 2025), this is often simplified to $R = 1$ by normalizing the action sets.
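A diameter bound of this kind can be verified by brute force on a tiny knapsack instance (an illustrative sketch of ours; the prices, budget, and the lower bound `p_min` are hypothetical). Since any feasible binary action has at most $b/\underline{p}$ nonzero entries, pairwise $\ell_2$ distances are at most $2\sqrt{b/\underline{p}}$:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, b, p_min = 8, 3.0, 0.5
p = rng.uniform(p_min, 2.0, n)   # price vector with entries >= p_min

# Enumerate all binary actions satisfying the knapsack constraint <p, a> <= b.
feasible = [np.array(a, dtype=float) for a in product((0, 1), repeat=n) if p @ a <= b]

k = b / p_min                     # any feasible action has at most b/p_min ones
max_support = max(a.sum() for a in feasible)
# Hence the l2-diameter of the action set is at most 2*sqrt(b/p_min).
diam = max(np.linalg.norm(a - a2) for a in feasible for a2 in feasible)
print(max_support <= k and diam <= 2 * np.sqrt(k) + 1e-9)
```

The check confirms both the support bound and the resulting diameter bound on this instance.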
Finally, the constant $S$ is upper bounded by $KR$ via the Cauchy–Schwarz inequality. We introduce $S$ to express the regret bounds with sharper dependence on the problem geometry. In practice, $S$ can also be interpreted as prior knowledge about the agent's maximum attainable objective value (up to a constant factor).
Returning to the regret upper bound, given a specification of the agent's forward problem as discussed above, the constants in the bound can be computed straightforwardly. Roughly speaking, the logarithmic term can be estimated as at most $O(\ln T)$, since we typically have $KR = O(\mathrm{poly}(T))$. Thus, the dominant contribution from the constant factor is $nS$, which can be interpreted as the agent's maximum attainable objective value multiplied by the dimensionality. Note that this is the optimal dependence on $n$, since there is a regret lower bound of $\Omega(n)$, as shown in Theorem 5.1.
Appendix A provides a more detailed discussion on how these parameters influence the regret bounds, including comparisons with prior work.
Q4. Extension to non-linear objectives
Thank you for the question. While our paper focuses on linear objectives, the techniques extend to a broad class of forward problems.
If the agent's objective function is nonlinear (for example, quadratic of the form $a \mapsto \langle a, Q a \rangle + \langle c, a \rangle$), we can still apply our framework by defining a suitable feature map. Specifically, we may define
$$\phi(a) = (a a^\top, a),$$
so that the agent's objective becomes linear in the feature space: $\langle \theta, \phi(a) \rangle$, with $\theta = (Q, c)$ encoding $Q$ and $c$. As long as the objective is expressed in this linear form in $\phi(a)$, our surrogate loss $\ell_t$ remains exp-concave, and our approach can be applied; we have discussed a similar extension to the contextual setting of Besbes et al. (2021, 2025) in lines 562–578 of Appendix A. In this sense, our approach is compatible with a wide class of non-linear objectives. The main limitation of this extension is the increased dimensionality (the feature space above has dimension $n^2 + n$). Designing scalable algorithms that mitigate this issue is an important direction for future work.
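The lifting of a quadratic objective to a linear one can be checked in a few lines (an illustrative sketch; the names `phi` and `theta` are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                 # symmetric quadratic part
c = rng.standard_normal(n)

def phi(a):
    """Feature map lifting a to (vec(a a^T), a)."""
    return np.concatenate([np.outer(a, a).ravel(), a])

theta = np.concatenate([Q.ravel(), c])   # linear parameter encoding (Q, c)

a = rng.standard_normal(n)
quadratic = a @ Q @ a + c @ a            # original nonlinear objective value
linear_in_features = theta @ phi(a)      # same value, linear in phi(a)
print(np.isclose(quadratic, linear_in_features))
```

The two values agree, so any learner for the linear setting applies verbatim in the $(n^2 + n)$-dimensional feature space.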
Finally, note that although the linear setting may appear restrictive, it captures many practical applications and provides a theoretical foundation for understanding the regret guarantees achievable in inverse optimization. This is why a foundational body of prior work—including Bärmann et al. (2017, 2020), Gollapudi et al. (2021), and Besbes et al. (2021, 2025)—has also focused on linear objectives.
Q5. Regarding the $\ln T$ gap
Whether the $\ln T$ factor is avoidable in online inverse optimization is a fundamental open question, as also noted in prior work (e.g., Gollapudi et al., 2021, Section 4.1). Although no existing work has formally resolved this question, we believe that further progress could lead to removing the $\ln T$ factor from the regret upper bound. Our work contributes to this broader effort by offering a novel perspective: whereas previous approaches achieving logarithmic regret bounds (Gollapudi et al., 2021; Besbes et al., 2021, 2025) are based on cutting-plane methods, our approach is the first to leverage ONS in this context. We hope that this alternative methodology sheds new light on the structure of the problem and offers a fresh angle from which to approach this important open direction.
We hope that the above responses effectively address your concerns. Please do not hesitate to let us know if any questions remain or if further clarification would be helpful.
I thank the authors for the detailed responses that addressed my concerns/questions. The exp-concavity of the surrogate loss function is clearly proved, and the simplified analysis technique can be seen as a strength. Upon reading through the other reviewers' comments, I decided to increase my score while decreasing my confidence.
This paper studies the online inverse linear optimization problem, and shows that ONS and MetaGrad with suitable surrogate loss functions can achieve logarithmic regret bounds for this problem. The authors also establish an $\Omega(n)$ lower bound to demonstrate the near optimality of their algorithms, where $n$ is the dimensionality.
Note that previous studies have already proposed an algorithm to achieve the logarithmic regret bound for this problem. However, this algorithm is a cutting-plane-style method, which is more complex and requires much higher time complexity. Therefore, the main strengths of this paper are the efficient logarithmic-regret algorithms (i.e., ONS and MetaGrad with suitable surrogate loss functions) and the lower bound.
Nonetheless, there are also some weaknesses (or concerns initially raised by some reviewers). First, the applications of ONS and MetaGrad are very standard, as MetaGrad was originally proposed with surrogate loss functions [55, 56], which implies that the technical novelty of this paper may be limited. Second, there still exists a time-dependent gap between the lower bound and the upper bound. Third, the authors do not provide experiments to verify the advantage of the proposed algorithms.
Due to these strengths and weaknesses, this paper initially got scores of 2, 4, 4, 3, and 5 from five reviewers. But after the rebuttal, most concerns of the reviewers have been addressed: i) they reach a consensus that the simplicity of applying ONS and MetaGrad should be regarded as a strength, rather than a weakness; ii) the authors provide some experiments to demonstrate the advantage of the proposed algorithms. Accordingly, their scores became 3, 4, 5, 4, and 5.
I also like the simplicity (and efficiency) of applying ONS and MetaGrad to the online inverse linear optimization problem. Thus, I recommend accepting this paper, and hope that the authors could also revise this paper according to the reviewers' comments, e.g., the concerns about the experiments.