Improved Bounds for Swap Multicalibration and Swap Omniprediction
We prove improved bounds for swap multicalibration, swap omniprediction, and swap agnostic learning in both the distributional and online settings.
Abstract
Reviews and Discussion
This paper studies multicalibration and omniprediction problems in distributional and online settings. In the online setting, the authors show an -swap multicalibration error bound, resolving an open question posed by Garg et al. The algorithm, which is based on the Blum-Mansour reduction, is extended to significantly improve error and sample complexity bounds for several related notions, including contextual swap regret, multicalibration, and omniprediction errors.
Strengths and Weaknesses
Disclaimer: I am not familiar with the topic of the paper.
Assessment:
This paper makes a significant contribution by improving a large number of existing error and sample complexity bounds. In most cases, the improvements are orders of magnitude better than existing results (e.g., Theorem 1, Corollary 1, and Theorem 4).
The two new notions of pseudo swap multicalibration and pseudo contextual swap regret, as well as the reduction from the former to the latter, seem fundamentally important, as they ultimately lead to the improved -swap multicalibration error.
In addition to novel ideas and improved bounds, it appears that the paper also introduces new analysis techniques, as stated in line 134: "By performing a careful martingale analysis using Freedman's inequality on a geometric partition of the interval [0, 1], we finally establish the desired concentration bound." However, this new martingale analysis is not made clear in the main text.
In terms of weaknesses, I do not find anything significant. For a theoretical paper with many upper bounds, it is beneficial to have a formal argument on the sort of lower bounds one can obtain. However, it is understandable that lower bound proofs might take an entirely different paper.
Questions:
- Can the authors clarify where the "careful martingale analysis using Freedman's inequality on a geometric partition of the interval [0, 1]" is? I did look at Appendix D.1, but I cannot find any new technical lemma on this. Is this hidden in the proof of Lemma 9? In any case, could you briefly describe the nature of this new analysis?
- I cannot find the discussion on the computational complexity of your algorithms. In particular, I am concerned whether or not your algorithms have to explicitly construct a cover in Proposition 1.
- How robust are your results with respect to the particular finite discretization of line 76? (This might be a stupid question as I am not familiar with this setting).
- While I suppose the paper is sufficiently clear for expert reviewers who are familiar with swap regret and multicalibration, some parts of it should be improved for better clarity so that even people outside of the domain can understand. For example, I am unable to grasp what in line 154 is. Where does it come from; for example, is it something that the algorithm computes at time step ?
- Can you clarify what it means for to be deterministic?
- Similarly, Algorithm 1 is not self-contained as it is written. Are and input to Algorithm 1?
Questions
Please see the questions in the "Strengths and Weaknesses" section above.
Limitations
Yes
Final Justification
I keep my original positive score (4) since the authors have adequately addressed my questions.
Formatting Issues
No
We sincerely thank the reviewer for their comments and time in evaluating our paper. Below, we respond to all your questions and comments.
For a theoretical paper with many upper bounds, it is beneficial to have a formal argument on the sort of lower bounds one can obtain. However, it is understandable that lower bound proofs might take an entirely different paper.
Regarding lower bounds, we would like to mention that Appendix A does provide some lower bounds for online -Calibration (particularly, as defined in lines 167-168). In summary, proving lower bounds for online calibration is highly challenging, and obtaining the minimax bound is still open, with recent breakthroughs establishing that (for a sufficiently small ) and (Dagan et al., Fishelson et al.). Based on the above results, we infer the following:
(a) Since contextual learning is harder than the non-contextual variant, and for the squared loss, the non-contextual swap regret is equivalent to , the lower bound for also applies to (contextual swap regret).
(b) Since (swap) multicalibration is stronger than calibration, the same lower bounds also apply to .
(c) For swap omniprediction, Garg et al. proved that for a loss class that comprises convex Lipschitz functions, against a hypothesis class that consists of the constant functions, we have .
Can the authors clarify where the "careful martingale analysis using Freedman’s inequality on a geometric partition of the interval [0, 1]" is? I did look at Appendix D.1, but I cannot find any new technical lemma on this. Is this hidden in the proof of Lemma 9? In any case, could you briefly describe the nature of this new analysis?
Yes, the analysis is hidden in the proof of Lemma 9. Recall that Freedman's inequality (Lemma 6; Line 604) states that for a martingale difference sequence , where for all , we have with probability at least , where , is a fixed constant, and . When is a deterministic quantity, the upper bound in the display above can be minimized with respect to ; the optimal is . However, when is random (which is indeed the case in Lemma 9), we cannot merely substitute in the bound above, since is random. To circumvent this difficulty, we partition the feasible interval for geometrically (e.g., when , we partition into intervals of the form ), apply Freedman's inequality within each interval, and subsequently take a union bound. Notably, each interval corresponds to a condition on under which the optimal falls within that interval, and for that interval substituting or leads to only a lower-order blow-up in the quantity compared to substituting .
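To make the argument concrete, here is a schematic version using one standard form of Freedman's inequality; the constants and notation below are illustrative and may differ from our Lemma 6. For a martingale difference sequence $X_1,\dots,X_T$ with $|X_t|\le b$ and any fixed $\eta \in (0, 1/b]$, with probability at least $1-\delta$,
$$
\sum_{t=1}^{T} X_t \;\lesssim\; \eta \underbrace{\sum_{t=1}^{T} \mathbb{E}\!\left[X_t^2 \mid \mathcal{F}_{t-1}\right]}_{=:V} \;+\; \frac{\log(1/\delta)}{\eta}.
$$
The unconstrained minimizer is $\eta^\star = \sqrt{\log(1/\delta)/V}$, which is random because $V$ is. We therefore cover the feasible range of $\eta^\star$ by $O(\log T)$ dyadic intervals $[2^{-(i+1)}, 2^{-i}]$, apply the inequality with the fixed endpoint $\eta = 2^{-(i+1)}$ for each $i$, and take a union bound over the intervals (replacing $\delta$ by $\delta/O(\log T)$). On the event that $\eta^\star$ lands in the $i$-th interval, the endpoint is within a factor of $2$ of $\eta^\star$, so the resulting bound matches the oracle bound $O(\sqrt{V\log(1/\delta)})$ up to constants and a $\log\log T$ factor in the confidence term.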
I cannot find the discussion on the computational complexity of your algorithms. In particular, I am concerned whether or not your algorithms have to explicitly construct a cover in Proposition 1.
No, our algorithm does not construct a cover . We only utilize Proposition 1 in the analysis (particularly Lemma 2) in two ways: (a) the term is due to the bound on the size of the cover; (b) we upper bound for the cover by for the linear class. As can be observed from (b), rather than minimizing the pseudo swap multicalibration error over the cover, we minimize it over the linear class (since the cover is chosen as a subset of the linear class; cf. Proposition 1). Regarding the overall computational cost, our algorithm in Section 2.3 operates over the ball, and all our subroutines (BM reduction, ONS, randomized rounding) can be implemented efficiently over the ball.
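To illustrate why the per-round cost of the ONS subroutine is modest, below is a minimal, self-contained sketch of a single Online Newton Step update over a Euclidean ball. This is generic ONS for the squared loss with a plain Euclidean projection, not our exact instantiation; the step-size `gamma`, radius `R`, and the regularization constant `0.1` are placeholder values.

```python
import numpy as np

def ons_step(w, A_inv, grad, gamma=0.5, R=1.0):
    """One generic Online Newton Step update, projected onto the ball of radius R."""
    # Rank-one (Sherman-Morrison) update of A_inv after adding grad grad^T to A.
    Ag = A_inv @ grad
    A_inv = A_inv - np.outer(Ag, Ag) / (1.0 + grad @ Ag)
    # Newton-style step using the updated inverse.
    w_new = w - (1.0 / gamma) * (A_inv @ grad)
    # Simple Euclidean projection back onto the ball (the formal ONS analysis uses a
    # generalized projection in the A_t-norm; the Euclidean one is for illustration only).
    norm = np.linalg.norm(w_new)
    if norm > R:
        w_new *= R / norm
    return w_new, A_inv

# Toy usage on synthetic data with the squared loss (w . x - y)^2.
rng = np.random.default_rng(0)
d, T = 5, 100
w, A_inv = np.zeros(d), np.eye(d) / 0.1   # corresponds to A_0 = 0.1 * I
for t in range(T):
    x, y = rng.normal(size=d), rng.normal()
    grad = 2.0 * (x @ w - y) * x
    w, A_inv = ons_step(w, A_inv, grad)
```

Each such update takes $O(d^2)$ time and memory, which is why the overall algorithm remains efficient.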
How robust are your results with respect to the particular finite discretization of line 76?
The question of robustness with respect to the discretization scheme is generally not considered in online calibration, since the uniform discretization scheme (which arguably is the simplest discretization) almost always yields the desired result. However, depending on the curvature of the loss function or the problem itself, a non-uniform discretization scheme might be required; e.g., a non-uniform discretization scheme ( for all ) was used in Luo et al. (2025) to minimize the swap regret of the log loss.
Luo, H., Senapati, S., & Sharan, V. (2025). Simultaneous swap regret minimization via kl-calibration. arXiv preprint arXiv:2502.16387.
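Purely for illustration (these grids are schematic and not necessarily the ones used in our paper or in Luo et al.), a uniform grid and a non-uniform, geometrically spaced grid over $[0,1]$ look like
$$
\mathcal{P}^{\mathrm{unif}}_N = \Big\{0, \tfrac{1}{N}, \tfrac{2}{N}, \dots, 1\Big\},
\qquad
\mathcal{P}^{\mathrm{geo}}_N = \Big\{0,\, 2^{-N},\, 2^{-(N-1)}, \dots, \tfrac12,\, 1\Big\},
$$
where the latter places more points near the boundary, which can be useful when the loss has large curvature there (as the log loss does).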
While I suppose the paper is sufficiently clear for expert reviewers who are familiar with swap regret and multicalibration, some parts of it should be improved for better clarity so that even people outside of the domain can understand. For example, I am unable to grasp what in line 154 is. Where does it come from, for example, is it something that the algorithm computes in time step ?
We apologize for the lack of clarity for the more general audience; this is something that we shall address in subsequent revisions. represents a probability distribution over the set . The elements of are the predictions that our algorithm can make. At each time , our algorithm computes and predicts by sampling according to , i.e., .
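In symbols (with notation that may differ slightly from the paper):
$$
p_t \in \Delta(\mathcal{P}_N), \qquad x_t \sim p_t, \quad\text{i.e.,}\quad \Pr[x_t = v] = p_t(v) \ \text{ for each } v \in \mathcal{P}_N,
$$
so $p_t$ is the (history-dependent) distribution computed by the algorithm at round $t$, and the actual prediction is obtained by sampling from it.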
Can you clarify what it means for to be deterministic?
Recall that represents a probability distribution over the set {}, i.e., , where we have used the notation to represent the simplex over . being deterministic means that the algorithm does not sample , where . Note that this is not necessarily true for an adaptive adversary, since the adversary can inject its own randomness and make random despite the algorithm not randomizing over ; however, since we consider an oblivious adversary throughout the paper, we can adopt the above interpretation.
Similarly, Algorithm 1 is not self-contained as it is written. Are and input to Algorithm 1?
Algorithm 1 invokes as a subroutine, and is an input to (and therefore to Algorithm 1 as well). We apologize if this was not clear and shall revise accordingly in subsequent versions.
Should you have any more questions, do not hesitate to reach out. If our responses have addressed your concerns, we would be grateful if you would consider updating your assessment accordingly.
Dear Reviewer,
Since the discussion period is coming to an end soon, we would like to know if you have any further questions, or if our answers have addressed your questions.
Best, Authors
This paper provides an algorithm and improved upper bounds for the problems of (swap) omniprediction and multicalibration in the online and (by an online-to-batch reduction) the offline batch settings. In particular, the authors employ an alternative proof strategy to solve an open problem proposed by Garg et al. on the possibility of achieving -multicalibration error against bounded linear functions. The authors' main technical result is an algorithm for achieving -swap multicalibration in . This bound also implies significantly improved rates for multicalibration, swap omniprediction for convex Lipschitz functions, and, accordingly, improved sample complexity in the offline analogues. The main argument, to my understanding, is as follows:
- The authors introduce two notions, pseudo swap multicalibration and pseudo contextual swap regret, that avoid dealing with averaging over the Learner's predictions in the online game, and a reduction exists from swap multicalibration to pseudo swap multicalibration (simply via a covering argument on the class of linear functions)
- The authors reduce pseudo swap multicalibration to pseudo contextual swap regret via properties of the linear function class
- main technical contribution: the authors develop an algorithm for pseudo contextual swap regret building on the classical external-to-swap-regret reduction of Blum and Mansour and an improved rounding scheme. This reduction requires an external regret algorithm, which the authors instantiate simply as Online Newton Step for a particular linear function class.
The upper bound obtained by this argument then implies improved results in online and offline (swap) omniprediction and -multicalibration through improvements of more standard arguments.
Strengths and Weaknesses
I am quite positive on this paper -- the results are clearly presented with an eye towards giving a nice proof sketch and overview of the entire argument for the reader, the techniques themselves are combined in a novel way, and the authors provide good contextualization with the rest of the literature on omniprediction and multicalibration. The only weakness the authors might address is to provide a bit more intuition on why these particular techniques result in the improved bounds -- although it seems clear to me that the improvements come from the reduction to the pseudo notions, an explanation of why the pseudo notions, combined with the standard Blum/Mansour external-to-swap-regret reduction and the improved rounding scheme, yield a key algorithmic improvement over the prior work of Garg et al. would be welcome. I understand that the rates go down, but it would be very nice to also have the insight on why the pseudo notion is "easier to optimize" in this particular context. A couple of sentences on that would improve an already very good paper.
Strengths
Originality: To my knowledge, the authors' techniques in proving the main improved upper bound on -swap multicalibration are novel, and, in particular, the key introduction of pseudo swap multicalibration and pseudo contextual swap regret is novel. In fact, I am interested in seeing if similar notions have cropped up in the literature (see "Questions"), as the move from having to condition using the empirical conditional distribution of the to just dealing with the true distributions at each step certainly simplifies the "conditioning" problem that arises when dealing with these multicalibration notions in the online setting, and I'd be interested in seeing if this kind of strategy has been employed before. To my knowledge, this seems like a new move.
Quality: The results of the paper themselves improve bounds for several open questions, and the authors "use the full force" of their main derived bound to fully address the implications downstream for omniprediction and multicalibration by also working out the details for the offline setting, which I appreciated. It answers an existing open question about a specific faster rate for for the specific problem of swap multicalibration in the affirmative, and the new algorithm that results in this upper bound uses the authors' pseudo notions effectively.
Clarity: The paper itself is very well-written and the results are presented in a clear manner, with intuitive proof sketches and a logical flow to the sections. The provided diagram is helpful in understanding the big picture of the argument, and the authors do not waste space on more routine details. I had little trouble following along with the paper (though I am quite familiar with the multicalibration/omniprediction subfield, so others' mileage may vary).
Significance: I believe that the results the authors provide are significant enough to warrant publication and answer an existing open question. The reduction to the pseudo notions also gives us insight into a particular difficulty in multicalibration/related problems in regret minimization, particularly their swap variants.
Weaknesses
The weaknesses I see in the work are minor.
The first pertains to motivation: the authors might provide a bit more intuition on why these particular techniques result in the improved bounds -- although it seems clear to me that the improvements come from the reduction to the pseudo notions, a comparison explaining why the pseudo notions combined with their algorithm net an improvement would help.
Another weakness one might argue is that the results themselves are too narrow -- that the result for just the class of bounded linear functions doesn't achieve the scope of previous multicalibration results. I don't really see this as a big weakness, as the existing open question asked if a faster rate was possible even for bounded linear functions, and that was not known. Although it seems to me that more general function classes may need different techniques entirely (it is unclear to me how to extend the treatment of the "pseudo" notions to more complex function classes), the technical contribution of the authors for bounded linear functions does add knowledge to the subfield, in my opinion.
Questions
- Have these pseudo notions previously appeared in the literature? It seems that the argument hinges around the construction of these notions, and they do point towards a major difficulty in multicalibration-style regret bounds -- the conditioning on the s.
- Is the bounded linear functional necessary in this argument because of the covering and reduction to (pseudo notions)? The argument seems specialized once we go to pseudo notions because of the ease of constructing a cover. To this note, would more general function classes need new techniques entirely?
- I'm curious as to the efficiency compared to the previous algorithm of Garg et al.? To my understanding, the Blum/Mansour reduction is efficient because it just employs external regret algorithms (where is from the rounding). Even a qualitative comparison of the "efficiency" compared to the previous algorithm would be interesting, just to note the algorithmic differences of the approaches.
Limitations
Yes (pure theory paper).
Final Justification
I keep my original positive score.
Formatting Issues
It seems that the authors attached the optional Appendix to the nine content pages by mistake (but the checklist is still there).
We sincerely thank the reviewer for their comments and time in evaluating our paper. Below, we provide responses to all your questions and comments.
Have these pseudo notions previously appeared in the literature? It seems that the argument hinges around the construction of these notions, and they do point towards a major difficulty in multicalibration-style regret bounds -- the conditioning on the 's.
To the best of our knowledge, the work by Fishelson et al. [8] was the first to introduce the notion of pseudo swap regret (referred to in their paper as full swap regret) and pseudo calibration. Their motivation was to minimize the pseudo swap regret for a broad class of loss functions (e.g., strongly convex, Lipschitz) in the high-dimensional setting. Following up on their work, Luo et al. [7] considered online calibration of binary outcomes and proposed a new calibration measure called KL-Calibration (along with an accompanying notion, pseudo KL-Calibration), which they showed is equivalent to the swap regret (resp. pseudo swap regret) of the log loss. As mentioned in the manuscript, our introduction of pseudo contextual swap regret and pseudo swap multicalibration is motivated by the above works.
Moreover, pseudo notions have previously appeared in the literature in several other contexts. For example, a recent line of work on U-Calibration (Kleinberg et al. [24], Luo et al. [25]) also considered (pseudo) U-Calibration, where the U-Calibration error () for a family of loss functions is defined as , where represents the external regret incurred by the forecaster for a loss function , whereas the pseudo U-Calibration error is defined as . A similar notion of pseudo regret has also previously appeared in the literature on multi-armed bandits (ref. the book by Bubeck and Cesa-Bianchi).
Bubeck, S., & Cesa-Bianchi, N. (2012). Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends® in Machine Learning, 5(1), 1-122.
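To make the U-Calibration example above concrete, a schematic version of the definitions (the notation is illustrative and may not match [24, 25] exactly) is
$$
\mathrm{UCal}_T \;=\; \sup_{\ell \in \mathcal{L}} \mathrm{Reg}_T(\ell)
\;=\; \sup_{\ell \in \mathcal{L}} \left( \sum_{t=1}^{T} \ell(x_t, y_t) - \min_{x^\ast} \sum_{t=1}^{T} \ell(x^\ast, y_t) \right),
$$
while the pseudo variant replaces the realized (sampled) forecast $x_t$ inside the loss by the forecaster's distribution $p_t$, i.e., $\ell(x_t, y_t)$ is replaced by $\mathbb{E}_{x \sim p_t}[\ell(x, y_t)]$, thereby removing the sampling randomness from the quantity being bounded.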
Is the bounded linear functional necessary in this argument because of the covering and reduction to (pseudo notions)? The argument seems specialized once we go to pseudo notions because of the ease of constructing a cover. To this note, would more general function classes need new techniques entirely?
As you pointed out, two crucial steps that invoke specific properties of the linear class are: (a) the existence of a cover , which allows us to bound for the linear class in terms of for the cover (in Lemma 2, we relate the latter to the pseudo swap multicalibration error for the cover via the concentration bound in Lemma 1; the term in Lemma 1 is needed to obtain -style bounds); (b) an explicit algorithm (ONS) that minimizes the scaled external regret against the linear class. For an arbitrary that is closed under affine transformations (we require the closure property for the reduction from (pseudo) swap multicalibration to (pseudo) contextual swap regret (Lemma 3)), we suspect that Rademacher complexity arguments for obtaining fast rates for online learning of the squared loss can be generalized to minimize the scaled external regret in the display above. However, for an arbitrary that does not admit a finite cover, we are not aware of any techniques to generalize and prove an equivalent of Lemma 2. While it is certainly possible to derive an equivalent of Lemma 1 for an arbitrary using results from the theory of empirical processes (in particular, following the chaining technique in Block et al., we would bound the deviation of a martingale difference sequence ), we do not expect to retrieve the term appearing in Lemma 1, since the proof of Lemma 1 heavily relies on controlling the variance of the associated martingale difference sequence via Freedman's inequality. We are not aware of analogous results that bound the expected deviation of the supremum of a martingale difference sequence in terms of its variance.
In summary, following our framework, generalizing the bound on swap multicalibration from the linear class to an arbitrary seems quite involved and requires significant heavy lifting.
Block, A., Dagan, Y., & Rakhlin, A. (2021, July). Majorizing measures, sequential complexities, and online learning. In Conference on Learning Theory (pp. 587-590). PMLR.
I'm curious as to the efficiency compared to the previous algorithm of Garg et al.? To my understanding, the Blum/Mansour reduction is efficient because it just employs external regret algorithms (where is from the rounding). Even a qualitative comparison of the "efficiency" compared to the previous algorithm would be interesting, just to note the algorithmic differences of the approaches.
For minimizing the contextual swap regret of the squared loss against the linear class, the algorithm of Garg et al. invokes the reduction from swap regret to external regret proposed by Ito [10] and the Azoury-Warmuth (AW) algorithm for minimizing external regret against the linear class. In contrast, for minimizing pseudo contextual swap regret, we utilize the Blum-Mansour reduction and the ONS algorithm for minimizing the scaled external regret against the linear class. Qualitatively, both the Blum-Mansour reduction and the reduction by Ito require fixed-point computations, and, roughly speaking, the AW algorithm requires computation similar to ONS, thereby rendering our computational cost similar to that of Garg et al. Notably, both our algorithm and that of Garg et al. are efficient due to the efficiency of the subroutines that they invoke.
However, Garg et al. achieve -swap multicalibration error, whereas we achieve -swap multicalibration error (we also achieve a superior dependence on the dimensionality ), which is orders of magnitude better. In a restricted setting ( is finite), Garg et al. also proposed an algorithm that achieves -multicalibration error. However, the above algorithm is inefficient since it has running time proportional to .
Azoury, K. S., & Warmuth, M. K. (2001). Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine learning, 43(3), 211-246.
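For concreteness, a generic, self-contained sketch of the Blum-Mansour external-to-swap reduction over K actions is shown below; this is the textbook reduction with multiplicative-weights sub-algorithms, not our exact instantiation with ONS and randomized rounding, and the loss model and parameters are placeholders. It illustrates the fixed-point (stationary-distribution) computation mentioned above.

```python
import numpy as np

def stationary_distribution(Q, iters=1000):
    """Approximate the fixed point p = p Q of a row-stochastic matrix Q by power iteration."""
    p = np.full(Q.shape[0], 1.0 / Q.shape[0])
    for _ in range(iters):
        p = p @ Q
        p /= p.sum()
    return p

def blum_mansour(loss_fn, K, T, eta=0.1, seed=0):
    """Generic Blum-Mansour reduction: one external-regret algorithm (here multiplicative
    weights) per action; play the stationary distribution of the matrix of their
    recommendations; feed sub-algorithm i the loss scaled by its play probability p[i]."""
    rng = np.random.default_rng(seed)
    weights = np.ones((K, K))  # weights[i] = weight vector of the sub-algorithm for action i
    for t in range(T):
        Q = weights / weights.sum(axis=1, keepdims=True)  # row i: distribution of sub-algorithm i
        p = stationary_distribution(Q)                     # combined play distribution (fixed point)
        action = rng.choice(K, p=p)                        # realized action (not needed for updates)
        losses = loss_fn(t)                                # loss vector of length K for this round
        for i in range(K):
            weights[i] *= np.exp(-eta * p[i] * losses)     # sub-algorithm i sees p[i]-scaled losses
    return p

# Toy usage with random losses over K = 4 actions.
final_p = blum_mansour(lambda t: np.random.default_rng(t).uniform(size=4), K=4, T=200)
```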
We hope that our responses answer your questions. Should you have any more questions, please feel free to reach out.
Thank you to the authors for providing these comprehensive answers to my questions! I have decided to keep my score.
This paper derives improved online rates and sample complexity rates for swap omniprediction.
Swap omniprediction is a strengthening of omniprediction (learn a predictor that is close enough to Bayes optimal that it can be post-processed to be near-optimal for any loss) that was introduced by GKR24, who motivated the concept by noting that while multicalibration > omniprediction, swap multicalibration = swap omniprediction.
The authors obtain these improved rates by instead first deriving faster rates for a 'pseudo' variant of swap multicalibration that allows them to reduce to a more tractable problem of pseudo contextual swap regret.
The main proofs are for the online setting, with the distributional sample complexity rates following from an online-to-batch reduction. The main intuition of the paper (which is nicely summarized in Figure 1) is that instead of obtaining swap multicalibration rates by reducing to swap regret minimization, the authors instead reduce to pseudo swap regret minimization. One way to understand what this detour through pseudo calibration measures is doing is that it effectively separates the analysis of the variance caused by the forecaster's random forecasts from the actual learning component.
The interesting thing (at least to me, who has not followed the recent work here closely) is that reducing from swap multicalibration to pseudo swap multicalibration costs so little overhead. This is done in Lemma 1, which shows that $\mathrm{SMCal}_{\mathcal{F},2} \leq O(N \log(|\mathcal{F}| N / \delta)) + 6\,\mathrm{PSMCal}_{\mathcal{F},2}$, with the overhead term being negligible for the types of discretizations we normally consider, e.g., $N = T^{1/3}$. The analysis is an adaptation of Freedman's inequality, which is currently not really talked about in the main text.
The pseudo multicalibration rates are obtained by reducing to pseudo contextual swap regret. This happens in Lemma 3 (through Lemma 4). Specifically: if there is pseudo multicalibration error, then that error can be captured by a linear competitor, meaning that there must be some pseudo contextual swap regret. So, pseudo contextual swap regret for the affine closure of functions in F always upper bounds pseudo multicalibration for F. The other step of the reduction, going from swap multicalibration to swap omniprediction, is standard.
The other chunk of the paper is actually obtaining good rates for pseudo swap regret minimization. The algorithm follows the usual Blum-Mansour algorithm for minimizing swap regret, where you instantiate multiple algorithms, one for each action, and then act according to a stationary distribution over the algorithms. In this case, we use the BM algorithm with actions corresponding to discretized level sets and Online Newton Step for the external regret minimizers (because the squared loss is exp-concave). The algorithm design and analysis follow a pretty standard recipe, with the main interesting component being the observation that relaxing to "pseudo" swap regret means that instead of the BM reduction getting noisy feedback, it gets deterministic feedback, and so one doesn't need to add in an error term for stochastic approximation.
Strengths and Weaknesses
The paper is well-written with concrete improvements in theoretical results (improved rates for swap omniprediction) and a clean conceptual contribution: when trying to get rates for swap calibration/omniprediction/agnostic-learning, it's often easier to mentally separate out how one analyzes the stochastic approximation errors from forecast randomness—reducing to pseudo calibration and pseudo swap regret is a nice way to think about how to do this.
I don't have any particular concerns with weaknesses in presentation or results. I did not carefully check the proofs, which are deferred entirely to the appendix, though the results and steps seem reasonable.
Questions
I'm not very familiar with the swap omniprediction literature; is there a fundamental $\varepsilon^{-3}$ vs. $\varepsilon^{-2}$ separation for swap vs. normal omniprediction that's known?
Prior offline swap omniprediction rates cited in Table 1 are attributed to [2, 13] rather than [5]: is there a challenge with online-to-batch adaptation of the rates of [1]?
Why do these detours into pseudo calibration measures arise with "swap" objectives whereas e.g. [6]'s optimal rates for omniprediction do not appear to need them?
Limitations
Yes
Final Justification
I maintain my original positive review.
Formatting Issues
None
We sincerely thank the reviewer for their comments and time in evaluating our paper. Below, we provide responses to all your questions.
I'm not very familiar with the swap omniprediction literature; is there a fundamental separation for swap vs normal omniprediction that's known?
We are not aware of such a separation in the offline setting. However, in the online setting, Garg et al. [1] established that for a loss class of convex Lipschitz functions, against a hypothesis class that comprises the constant functions, we have , where the lower bound is due to a breakthrough result by Dagan et al. [19], Qiao and Valiant [18]. The lower bound on swap omniprediction also implies that an online-to-batch conversion for swap omniprediction cannot achieve sample complexity, in general. For omniprediction, Garg et al. [1] and Okoroafor et al. [6] showed that omniprediction error is achievable, thereby establishing a strict separation between the above notions in the online setting.
Prior offline swap omniprediction rates cited in Table 1 are attributed to [2, 13] rather than [5]: is there a challenge with online-to-batch adaptation of the rates of [1]?
[5] established an equivalence between swap multicalibration and swap omniprediction, and also showed that the multicalibration algorithm of [2] is swap multicalibrated; therefore, we attributed the bound to [2], which had an explicit sample complexity bound. Moreover, a characterization of swap multicalibration in terms of swap agnostic learning was concurrently obtained by [13], who also derived an explicit sample complexity for swap agnostic learning; therefore, we chose to refer to [13]. However, we accept your suggestion and shall additionally refer to [5] for the result regarding swap omniprediction. For the second part of your question, a standard online-to-batch adaptation (using the Azuma-Hoeffding inequality) of the rates of [1] would yield a sample complexity for swap omniprediction for the class of convex Lipschitz functions against the class of scaled and shifted linear functions. Throughout the paper, we abstain from referring to this result since it was not mentioned in [1]; we instead tabulate the rates that were explicitly documented in the literature.
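Schematically, the standard conversion we refer to proceeds as follows (stated generically; the exact martingale for swap omniprediction and the resulting constants differ). If $\mathrm{err}(h_t; \cdot)$ denotes a bounded error functional and $h_1, \dots, h_T$ are the predictors produced online, then with probability at least $1-\delta$,
$$
\frac{1}{T}\sum_{t=1}^{T} \mathrm{err}(h_t; \mathcal{D})
\;\le\;
\frac{1}{T}\sum_{t=1}^{T} \mathrm{err}\big(h_t; (x_t, y_t)\big)
\;+\;
O\!\left(\sqrt{\frac{\log(1/\delta)}{T}}\right),
$$
by the Azuma-Hoeffding inequality applied to the martingale difference sequence $\mathrm{err}(h_t;\mathcal{D}) - \mathrm{err}(h_t;(x_t,y_t))$. Outputting a uniformly random $h_t$ (or their average, when the error is convex) then converts an online error bound of order $T^{\alpha}$ into an average error of order $T^{\alpha-1}$; for instance, $\alpha = 1/2$ yields $\varepsilon^{-2}$ samples and $\alpha = 2/3$ yields $\varepsilon^{-3}$ samples, up to the Azuma-Hoeffding term.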
Why do these detours into pseudo calibration measures arise with "swap" objectives whereas e.g. [6]'s optimal rates for omniprediction do not appear to need them?
Excellent question. The relationship between swap regret and calibration has long been known, starting with the seminal work of Foster and Vohra [9]. It is known from several papers that -Calibration () is equivalent to the swap regret of the squared loss. While this opens up a natural direction for developing algorithms for calibration, it also poses an immediate challenge --- minimizing swap regret is considerably more challenging than minimizing external regret. Along the direction of minimizing swap regret, the well-known Blum-Mansour [11] reduction works for the non-stochastic component of learning, i.e., it bounds what we refer to in the paper as pseudo swap regret, which is related to pseudo calibration. While there exist other reductions, e.g., Ito et al., Dagan et al., that reduce swap regret minimization to external regret minimization, they either lead to a suboptimal dependence on (Dagan et al.) or involve further randomization in the algorithm (Ito et al.), rendering their adaptation to minimizing the swap regret of the squared loss quite non-trivial. In summary, the relationship between calibration and swap regret, and the fact that minimizing pseudo swap regret (thus, pseudo calibration) is easier than swap regret, is a possible explanation for the detour.
Ito, S. (2020). A tight lower bound and efficient reduction for swap regret. Advances in Neural Information Processing Systems, 33, 18550-18559.
Dagan, Y., Daskalakis, C., Fishelson, M., & Golowich, N. (2024, June). From external to swap regret 2.0: An efficient reduction for large action spaces. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing (pp. 1216-1222).
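To illustrate the equivalence mentioned above in the simplest (non-contextual, non-pseudo) case, suppose the forecasts $x_t$ take values in a finite grid and the loss is the squared loss $\ell(x,y)=(x-y)^2$ (the notation here is schematic). Then
$$
\mathrm{SwapReg}^{\mathrm{sq}}_T
= \sum_{v} \max_{v'} \sum_{t:\, x_t = v} \Big[ (v - y_t)^2 - (v' - y_t)^2 \Big]
= \sum_{v} n_v\, (v - \bar{y}_v)^2,
$$
where $n_v = |\{t : x_t = v\}|$ and $\bar{y}_v$ is the average outcome over those rounds; the inner maximum is attained at $v' = \bar{y}_v$ (allowing $v'$ to range over $[0,1]$; restricting it to the grid changes the identity only by the discretization error). The right-hand side is exactly the (unnormalized) $\ell_2$-calibration error.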
We hope that our responses answer your questions. Should you have any more questions, please feel free to reach out.
Dear Reviewer,
Since the discussion period is coming to an end soon, we would like to know if you have any further questions, or if our answers have addressed your questions.
Best, Authors
Thanks for the response–I'll maintain my original positive review
This paper gives novel bounds on the multicalibration and omniprediction errors in both the adversarial (online) and distributional settings. The proven error bounds in the adversarial setting are as follows:
- swap multi-calibration regret: (improved from )
- swap omniprediction regret: (improved from )

Similar improvements have also been established in the distributional setting, with an online-to-batch conversion. The key idea is to connect to a particular notion of a pseudo-regret (where some of the randomness is removed) via a martingale concentration argument.
Strengths and Weaknesses
Strengths:
- This paper gives significantly improved bounds across the board over prior works on swap multi-calibration and, via established reductions, on swap omniprediction. Both of these are previously studied problems in the literature.
- While the techniques and individual steps appear to be known or inspired by very similar ones, the improved bounds are obtained by carefully putting together various pieces.
- The paper seems well-polished and I did not see any typos.
Weaknesses:
I found the paper nearly impossible to parse. As somebody who is familiar with learning theory but never previously heard about multi-calibration or omniprediction (or swap contextual regret, or similar terms), I expected to be able to at least understand the definitions and the significance of the results. While the paper reads like something that researchers knowledgeable in this (apparently niche) area would appreciate, I did not even understand the basic definitions. Further, it tries to pack a lot of math into a small number of pages, making the text incredibly dense as well.
I especially did not understand the definition of multi-calibration. While this is an established notion and has a prior literature, I believe the onus is on the authors to provide enough intuition about the definitions and significance of the results (at least in the appendix). Similarly, I do not understand where the "swap" versions come from or why they are interesting.
It would have been nice to see more special cases/connections as to why these bounds are interesting, such as connections to textbook examples that a reader such as myself would appreciate. For example, the introduction mentions connections to fairness: that would be a good setting to expand upon.
Questions
- What does the notation mean? Is this the set of functions ?
- The connections between multi-calibration and fairness are unclear. It would be good to dedicate some space to that aspect to appeal to a broader audience.
- Are any lower bounds known for any of these problems? What about simplifications to known textbook-type problems?
- I'm unable to parse and the meaning of the definition in eq. (1). This needs some interpretation.
Limitations
N/A
Final Justification
I'm updating my score with the understanding that the authors will revise the paper to make it more broadly accessible (with interpretations, examples, etc.) and less mathematically dense.
Formatting Issues
N/A
We sincerely thank the reviewer for their comments and time in evaluating our paper. We are sorry to hear that the reviewer found our paper difficult to parse. We apologize and shall revise the paper to make it accessible to a broader audience, keeping in mind the several suggestions given by the reviewer. Below, we respond to all your questions and comments, along with an (additional) explanation of multicalibration, since the reviewer was unable to understand the concrete definition.
What does the notation mean? Is this the set of functions ?
Yes, this is a standard notation used in set theory.
Are any lower bounds known for any of these problems? What about simplifications to known textbook-type problems?
Regarding lower bounds, we would like to mention that Appendix A does provide some lower bounds for online -Calibration (particularly, as defined in lines 167-168). In summary, proving lower bounds for online calibration is highly challenging, and obtaining the minimax bound is still open, with recent breakthroughs establishing that (for a sufficiently small ) and (Qiao and Valiant, Dagan et al., Fishelson et al.). Based on the above results, we infer the following:
(a) Since contextual learning is harder than the non-contextual variant, and for the squared loss, the non-contextual swap regret is equivalent to , the lower bound for also applies to (contextual swap regret).
(b) Since (swap) multicalibration is stronger than calibration, the same lower bounds also apply to .
(c) For swap omniprediction, Garg et al. proved that for a loss class that comprises convex Lipschitz functions, against a hypothesis class that consists of the constant functions, .
Regarding simplifications to known textbook-type problems, we would like to mention that some reductions do exist; however, the reductions and the reduced problems are far from textbook material, e.g., a lower bound for was proved by Qiao and Valiant by reducing to the sign preservation game, and an improved lower bound was proved by Dagan et al. by reducing to the sign preservation game with reuse.
I'm unable to parse and the meaning of the definition in eq.(1). This needs some interpretation.
represents the average correlation between and the residual error sequence, conditioned on the rounds in which the forecaster predicts . Recalling the requirement for multicalibration in the distributional setting, i.e., for all , the quantity is an analogue of , adapted to the online setting. Similarly, the normalized term in Equation (1) (normalization skipped in the manuscript) corresponds to the average frequency of the prediction and is an analogue of the quantity . With this intuition, the quantity corresponds to the maximum correlation (weighted by the number of rounds in which the prediction is ) achieved by an for the prediction . The cumulative sum across all predictions corresponds to the quantity (we allow higher-order -th moments for full generality). Intuitively, a small value of indicates that no hypothesis in is able to detect correlation with the residual sequence when conditioned on the level sets.
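For reference, a representative form of the quantity being interpreted (the notation here is schematic and may differ from Equation (1), e.g., in the normalization and in the choice of the moment) is
$$
\sum_{v \in \mathcal{P}_N} \; \max_{f \in \mathcal{F}} \; \Bigg| \sum_{t \,:\, x_t = v} f(z_t)\,\big(y_t - v\big) \Bigg|,
$$
i.e., for every level set $\{t : x_t = v\}$ of the forecaster's predictions we take the hypothesis $f \in \mathcal{F}$ most correlated with the residuals $y_t - v$ on that level set, and then sum these conditional correlations over the levels $v$.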
The connections between multi-calibration and fairness are unclear. It would be good to dedicate some space to that aspect to appeal to a broader audience.
Thank you for pointing this out. We shall clarify this in subsequent revisions. Intuitively, since multicalibration is defined at the scale of partitions of the instance space (the predictions are conditionally unbiased for every set in a collection of subsets of the input space ), it allows fairness guarantees for every subset of simultaneously; e.g., could represent the population of a province, the set can represent several possibly overlapping demographic groups within that province, and can be the probability that an individual is affected by a particular disease. In contrast, calibration is only a marginal requirement and cannot address such finer fairness concerns. For further discussion regarding the fairness implications of multicalibration, we refer to Obermeyer et al.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
I especially did not understand the definition of multi-calibration.
We begin by defining calibration in the distributional setting. Consider a binary classification problem with instance space , label space {}, an unknown distribution over , and the Bayes optimal predictor . Let be a predictor and let denote the range of (the set of all values that predicts). The predictor is perfectly calibrated if for all we have , i.e., the predictions of are correct conditioned on its level sets.

With this background, multicalibration can be interpreted as a refinement of calibration to partitions of the instance space . Particularly, given a collection of potentially overlapping subsets of , a predictor is perfectly multicalibrated if for all subsets and , we have . As can be noted from the definition, the Bayes optimal predictor is multicalibrated. For both computational and information-theoretic reasons, obtaining the Bayes optimal predictor is considered infeasible; therefore, the problem of multicalibration asks us to efficiently find a predictor such that for all .

Note that the above definition of multicalibration can be made more general. Specifically, since each subset of can be described via a function {} ( if and otherwise), the set can be equivalently described by a collection of set-membership functions. The above definition can then be equivalently written as . For an arbitrary real-valued hypothesis class , multicalibration requires for all . Since the quantity corresponds to the correlation between and the residual , multicalibration for an arbitrary real-valued hypothesis class can be interpreted as the condition that the hypotheses in cannot find any correlation with the residual sequence when conditioned on the level sets of .
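In symbols, and schematically (our notation may differ), the two conditions read
$$
\text{(calibration)}\qquad \mathbb{E}\big[\,y \mid f(x) = v\,\big] = v \quad \text{for all } v \in \mathrm{range}(f),
$$
$$
\text{(multicalibration)}\qquad \Big|\,\mathbb{E}\big[\,c(x)\,\big(y - f(x)\big)\,\mathbf{1}\{f(x) = v\}\,\big]\,\Big| \le \alpha \quad \text{for all } v \in \mathrm{range}(f) \text{ and } c \in \mathcal{C},
$$
where, for a collection of subsets, the functions $c$ are the set-membership indicators $c(x) = \mathbf{1}\{x \in S\}$, and for a real-valued hypothesis class one lets $c$ range over that class directly.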
Should you have any more questions, do not hesitate to reach out. If our responses have addressed your concerns, we would be grateful if you would consider updating your assessment accordingly.
Thank you for the detailed response. I'll try to get another reading in and get back in case of further questions.
Dear Reviewer,
Since the discussion period is coming to an end soon, and we haven’t heard back from you following up on your response, we would like to know if you have any further questions, or if our responses have answered your questions.
Best, Authors
No further questions. I have revised my score. Thank you!
This paper provides an algorithm and improved upper bounds for the problems of (swap) omniprediction and multicalibration in the online and (by an online-to-batch reduction) the offline batch settings. It has made several important theoretical contributions, including resolving an open problem in SODA and unifying calibration, swap regret, and omniprediction under a pseudo framework. Thus, I would recommend its acceptance with a spotlight.