Achieving Constant Regret in Linear Markov Decision Processes
Abstract
Reviews and Discussion
The paper studies constant regret guarantees for linear MDPs. The main result is the first algorithm with this guarantee. The algorithm is a modification of LSVI-UCB with a careful quantization technique and some form of action elimination. This improves upon the regret of previous works by a factor of $\log K$, where $K$ is the number of episodes. Unlike previous works, the algorithm is robust to some degree of model misspecification and does not need to know it in advance.
Strengths
- The method is robust to some model misspecification and does not need to know it in advance.
- The regret guarantee is tighter compared to previous works.
- The algorithm is computationally efficient in the sense that it runs in polynomial time in the problem parameters.
- There are significant technical challenges in the work, in particular the careful quantization required to avoid dependence on $K$ and the refined analysis of the regret bound via the suboptimality gaps.
Weaknesses
- While it's challenging to achieve regret that is independent of $K$, I do not see this as a significant improvement over the logarithmic dependence. This is especially true since it is not clear whether the correct polynomial dependence on the remaining parameters has been established. Moreover, this improvement is only relevant in the regime of a fixed success probability $1-\delta$, where $\delta$ often depends on $K$, e.g., when bounding the expected regret.
- The improvement to the regret comes at a significant increase to the algorithm's complexity. It's not clear whether this is necessary. For example, are you certain that the quantization cannot be replaced by a more careful analysis of the value cover? Why do you need both bonuses and action elimination and sample rejection? The runtime complexity also seems higher (though this is a minor concern). If you think these modifications are indeed necessary, perhaps you could demonstrate this with experiments.
- While I am not familiar with better results, the dependence on the worst case gap weakens the result. Do you think it is possible to achieve a better dependence? e.g. on the average inverse gap with respect to the optimal policy.
- The writing is not very consistent, with phrasing and grammatical issues in some places (e.g., "quantification" should be "quantization", among many others).
- The algorithm needs to be better explained. While it seems the authors made significant efforts in this regard, many parts of the algorithm remain rather unclear. For example, the bonuses are not immediately apparent. As I understand it, you use a quantization of the standard bonus. It would be much easier for the reader if this were made explicit in the algorithm and explained. Other examples are the action elimination and sample rejection, whose purpose is unclear.
Questions
See above.
Limitations
N/a
Thank you for your detailed feedback. We address your questions as follows:
Q1. Why is it significant to improve the logarithmic dependence on $K$?
A1. We would like to first highlight that achieving constant regret is actually an important topic in bandits (Abbasi-Yadkori et al., 2011; Papini et al., 2021a; Zhang et al., 2023) and RL (Papini et al., 2021b). This is not merely shaving off a $\log K$ factor but a stronger performance guarantee than logarithmic regret. Specifically, the constant regret bound indicates that the agent incurs only a finite number of mistakes when making decisions over an infinite run, whereas the number of mistakes can become arbitrarily large if the regret guarantee is only logarithmic.
Q2. Is the correct polynomial dependence on the remaining parameters established?
A2. We would like to point out that our algorithm has a polynomial dependence on the parameters $d$, $H$, and $\Delta^{-1}$, which is consistent with previous literature (He et al.; Papini et al.), as discussed in Table 1. Therefore, we are confident that the polynomial dependence on these parameters is correct.
Q3. What is the necessity of each component in the algorithm?
A3. We believe both the certified estimator and the layer-dependent quantization are necessary for our theoretical analysis. As discussed in Section 6, the certified estimator builds a connection between different layers so that we can avoid taking a union bound over all layers in the concentration argument. Layer-dependent quantization removes the additional $\log K$ dependence introduced by the covering of the value function in Vial et al. (2022).
In addition, we have also conducted experiments on a synthetic dataset as presented in the response to all reviewers. In particular, the certified estimator helps stabilize the estimation so that the algorithm can achieve lower regret when the misspecification grows larger. It is true that the quantization does not significantly contribute to improving the regret, which aligns with our intuition since the numerical results are intrinsically discrete and quantized. In addition, experimental results also suggest that neither the certified estimator nor the quantization decreases the efficiency of the algorithm.
Q4. Why is the analysis based on the minimal sub-optimality gap instead of other gaps (like averaged gaps)?
A4. The averaged sub-optimality gap, as you suggested, can only be applied to the tabular bandit setting, to the best of our knowledge. In reinforcement learning with function approximation, the number of actions can be arbitrarily large, making it difficult to define the 'averaged' sub-optimality gap in this setting.
Q5. Could you explain the details of the algorithm? Why do you need both the bonus and arm elimination?
A5. First, we would like to clarify that our algorithm does not use rejection sampling. Instead, we call the exit condition in Line 11 of Algorithm 2 the certified estimator: it ensures that the optimistic estimator for the current layer $\ell$ is always greater than the pessimistic estimator from the previous layer $\ell-1$, and vice versa. Therefore, if the concentration for layer $\ell$ fails, this is detected and the quality of the estimation is not affected, since the certified estimator (Line 11) is triggered and the algorithm returns the estimate from the previous layer. There is no connection between the certified estimator and rejection sampling, since neither estimator is sampled from a distribution and our algorithm is a learning algorithm rather than a sampling algorithm.
Second, regarding the bonus and the arm elimination, we quantize the regression parameters in Line 8 of Algorithm 1 in order to control the size of the value function class. Both the bonus term and the value functions are computed using these quantized parameters. In Algorithm 2, for each layer $\ell$, any action whose bonus term exceeds the layer-dependent threshold is directly selected for exploration.
Third, regarding the arm elimination process, for each layer $\ell$ in Algorithm 2, we eliminate all actions whose estimated value falls below the estimated optimal value by more than the layer-dependent tolerance. As a result, arm elimination shrinks the action set before proceeding to the next layer $\ell+1$. Since the algorithm pursues higher-precision estimates in deeper layers, which are more vulnerable to model misspecification, arm elimination removes clearly suboptimal actions in the shallow layers in order to deliver a more robust decision.
Additionally, in response to your question about the simultaneous use of bonus and arm elimination, it's important to note that this combination is commonly employed in bandits (Chu et al., Takemura et al.) and reinforcement learning (Vial et al.). Therefore, our design choice is conventional and aligns with standard strategies in the field.
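To make the interplay between the bonus, the arm elimination, and the certified estimator concrete, below is a schematic Python sketch of the per-layer decision rule for a fixed state and stage. The helper functions `estimate` and `bonus` and the specific thresholds are illustrative assumptions for exposition only, not the exact quantities in Algorithms 1 and 2.

```python
# Schematic sketch of the layered decision rule described above (bonus check,
# action elimination, certified-estimator exit). Helper functions and
# thresholds are illustrative assumptions, not the paper's exact Algorithm 2.
def select_action(actions, estimate, bonus, num_layers):
    candidates = list(actions)
    prev_pessimistic = {a: float("-inf") for a in actions}
    for ell in range(1, num_layers + 1):
        opt = {a: estimate(a, ell) + bonus(a, ell) for a in candidates}  # optimistic
        pes = {a: estimate(a, ell) - bonus(a, ell) for a in candidates}  # pessimistic

        # Certified-estimator check: the layer-ell optimistic estimate should not
        # fall below the layer-(ell-1) pessimistic estimate; otherwise fall back
        # to the estimate certified at the previous layer.
        if any(opt[a] < prev_pessimistic[a] for a in candidates):
            return max(candidates, key=lambda a: estimate(a, ell - 1))

        # If some action still carries a large bonus, explore it immediately.
        threshold = 2.0 ** (-ell)  # assumed layer-dependent tolerance
        uncertain = [a for a in candidates if bonus(a, ell) > threshold]
        if uncertain:
            return uncertain[0]

        # Action elimination: drop actions whose optimistic value falls short of
        # the best pessimistic value by more than the layer's tolerance.
        best_pes = max(pes.values())
        candidates = [a for a in candidates if opt[a] >= best_pes - threshold]
        prev_pessimistic = pes

    return max(candidates, key=lambda a: estimate(a, num_layers))
```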
- Abbasi-Yadkori et al. "Improved algorithms for linear stochastic bandits." NeurIPS, 2011
- Papini, Matteo, et al. "Leveraging good representations in linear contextual bandits." ICML, 2021a
- Papini, Matteo, et al. "Reinforcement learning in linear mdps: Constant regret and representation selection." NeurIPS, 2021b
- Zhang, Weitong, et al. "On the interplay between misspecification and sub-optimality gap in linear contextual bandits." ICML, 2023.
- Vial, Daniel, et al. "Improved algorithms for misspecified linear markov decision processes." AISTATS, 2022
- Chu, Wei, et al. "Contextual bandits with linear payoff functions." AISTATS, 2011
- Takemura, Kei, et al. "A parameter-free algorithm for misspecified linear contextual bandits." AISTATS, 2021
Thank you for the detailed response.
[A1]: I am fully aware that "shaving" this log factor achieves independence of the time horizon. However, this is only with a fixed probability $1-\delta$. When considering an infinite time horizon, it seems unlikely that $\delta$ would not scale with the horizon, as this means we can incur linear regret with constant probability.
[A2]: The fact that previous works achieve the same dependence on $d$ and $H$ does not imply that this is the correct dependence. Many works indeed focus on achieving improved dependence on $K$ as long as other parameters scale polynomially. However, I find this sensible only when the dependence on $K$ is polynomial. When it is logarithmic, I find it very likely that shaving a factor of $d$ or $H$ would be more significant than shaving a $\log K$ factor.
[A3+5]: Thank you for the explanations. While I'm not fully convinced of the necessity of each part, I get the general picture. Regardless, this is a secondary concern.
[A4]: Regarding gaps, is there a lower bound showing this is impossible in general? I am aware that this concern is not unique to this work but it is a weakness nonetheless.
Thank you for your reply. We address your questions and concerns as follows.
Q1. When considering an infinite time horizon, it seems unlikely that $\delta$ would not scale with the horizon, as this means we can incur linear regret with constant probability.
A1. We would like to emphasize that a constant failure probability $\delta$ is acceptable in practice, since it is highly unlikely to enter the "bad events" in a realistic RL task. Moreover, high-probability bounds are well studied and acknowledged in the literature (Jin et al., 2020) and cannot be viewed as some "intermediate" result towards an expected-regret result. Therefore, we believe our result is a direct improvement over these works (Jin et al., 2020; Vial et al., 2022), achieving constant regret in this regime.
Secondly, we would like to highlight that the constant regret bound is important because it reveals a PAC-type guarantee on the regret distribution (with probability at least $1-\delta$, the total regret is bounded by a constant depending on the gap $\Delta$ and $\log(1/\delta)$ but not on $K$), which cannot be obtained from an expected regret bound.
Q2. The correctness of the work.
A2. We would like to first acknowledge that it is indeed important to improve the dependence on $d$ and $H$, and we mentioned a few potential directions in the future work. However, we would like to point out that achieving constant regret is as important as shaving off other dependencies, because the $\log K$ factor and the polynomial factors in $d$ and $H$ are independent of each other. In the regime where $d$ and $H$ are small and $K$ is large, removing the $\log K$ factor is significant. We would also like to emphasize that the potential for shaving off other factors in future work does not imply our algorithm is incorrect, because these results are all upper bounds. We have also double-checked the proof before and during the rebuttal, and we are confident that our analysis is correct.
Q4. Is there a lower bound showing this is impossible?
A4. In the tabular setting, Simchowitz et al. (2019) and Xu et al. (2021) provided gap-dependent upper bounds (Theorem 1.1, Xu et al., 2021) that involve both an averaged inverse gap and the minimal gap $\Delta_{\min}$, where the $\Delta_{\min}$ term is driven by the set of state-action pairs $(s, a)$ such that $a$ is a non-unique optimal action for $s$. Only when the optimal action for each state is unique does their upper bound reduce to one that depends solely on the averaged inverse gap.
In the general case, where the optimal action is not necessarily unique, we would like to point out that Xu et al. (2021) proved a lower bound (Theorem 1.3) showing that no algorithm can achieve, for all MDPs as $K$ approaches infinity, a regret bound that depends only on the sum of the inverse gaps. As discussed in Xu et al. (2021), this suggests that it is not possible to achieve a regret bound that solely depends on the sum of the inverse gaps, even in the tabular MDP setting. Xu et al. (2021) also mentioned that this is a separation between RL and contextual bandits, since such an 'averaged inverse gap' bound can be achieved in contextual bandits (Bubeck et al., 2012; Lattimore et al., 2020).
Therefore, this lower bound also applies to linear MDPs: it is impossible to achieve a gap-dependent bound that depends solely on the averaged inverse gap, since any tabular MDP can be written as a linear MDP (Example 2.1, Jin et al., 2020).
In short, given these results from the literature, we believe that obtaining a bound that depends only on the averaged inverse gap, without other dependencies such as the minimal sub-optimality gap $\Delta_{\min}$, is impossible in linear MDPs.
- Vial, Daniel, et al. "Improved algorithms for misspecified linear markov decision processes." AISTATS, 2022
- Jin, Chi, et al. "Provably efficient reinforcement learning with linear function approximation." COLT, 2020
- Xu, Haike et al. "Fine-grained gap-dependent bounds for tabular mdps via adaptive multi-step bootstrap." COLT, 2021
- Simchowitz, Max et al. "Non-asymptotic gap-dependent regret bounds for tabular mdps." NeurIPS, 2019
- Bubeck, Sébastien et al. "Regret analysis of stochastic and nonstochastic multi-armed bandit problems." Foundations and Trends® in Machine Learning 5.1 (2012): 1-122.
- Lattimore, Tor et al. Bandit algorithms. Cambridge University Press, 2020
This paper proposed a constant regret learning algorithm for linear MDPs with approximation error (misspecification). Specifically, it proved an instance dependent regret that is independent of the number of episodes. In addition, the algorithm does not require prior knowledge of misspecification level or suboptimality gap.
Strengths
- The background of the problem is stated clearly with well summarized review of the literature.
- There is a comprehensive comparison of the new constant regret results with existing works.
Weaknesses
- The proposed algorithm and analysis follow the algorithm in Vial et al. (2022) closely. The main differences seem to be the layer-dependent quantization in Algorithm 1 and the exit condition in Algorithm 2. It would be better to discuss more clearly how these two modifications contribute to getting rid of the log(K) dependence.
- Some notations are used without explanation, for example $\lambda$ in Algorithm 1 and $\beta$ in Algorithm 2 and Theorem 5.1. Also, how the certified estimator is defined in Algorithm 2 is not clear.
- Some analysis and results require more clarification: (a) The condition on $\delta$ used in Theorem 5.1 does not match the one used in Lemma C.12. (b) It is not very clear how the key result for getting rid of log(K) (Lemma C.14, line 765) is derived.
Questions
- The new algorithm does not require UniSOFT assumption made by Papini et al. (2021a). However, in Thm 5 of that paper, they showed that UniSOFT is a necessary condition for constant regret. Can you explain how this does not apply to the current setup?
- It is not very intuitive to understand the constant regret with no dependency on the number of episodes. Are there any simple examples that can run the algorithm empirically to verify this behavior?
Limitations
There is no potential negative societal impact.
The authors addressed the limitations in Section 7 such as the tightness of the bound and uniform misspecification assumption.
Thank you for your positive feedback. We address your questions as follows:
Q1. How do the layer-dependent quantization and certified estimator contribute to getting rid of the $\log K$ dependence?
A1. We would like to highlight the contribution of the layer-dependent quantization and the certified estimator in eliminating the $\log K$ dependence. Vial et al. (2022) applied a global quantization, making the number of possible quantized parameters in each layer depend on $K$. Our method uses a unique quantization parameter per layer, thus eliminating the $\log K$ dependence in the regret bound (Lemma D.3).
In addition to the $\log K$ term introduced by the quantization, the estimated value function in Algorithm 2 is obtained from many individual regressions, one per layer. If controlled trivially, the union bound over all layers would also introduce a $\log K$ dependence into the final regret bound. The certified estimator further helps by connecting the estimates across layers. As a result, during the analysis we can focus on the concentration for a fixed layer $\ell$, which does not depend on $K$ (Lemma 6.1). The details are discussed in Section 6, Challenge 1.
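As a minimal illustration of the difference between a global and a layer-dependent quantization (the resolution schedules below are illustrative assumptions; the actual quantization of the regression parameters in Algorithm 1 is more involved):

```python
import numpy as np

def quantize(x, resolution):
    # Round every entry to the nearest multiple of `resolution`.
    return np.round(np.asarray(x) / resolution) * resolution

# Caricature of a global quantization: the resolution shrinks with K, so the
# number of representable parameter values grows with K and a log(K) factor
# enters the covering argument.
def global_resolution(K):
    return 1.0 / K

# Layer-dependent quantization: the resolution depends only on the layer index
# ell, so the size of the cover is independent of K.
def layer_resolution(ell):
    return 2.0 ** (-ell)

w_hat = np.random.default_rng(0).normal(size=8)   # regression weights of some layer
w_global = quantize(w_hat, global_resolution(K=10_000))
w_layer = quantize(w_hat, layer_resolution(ell=3))
```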
Q2. How are the certified estimator and the other notations defined?
A2. The certified estimator ensures that each layer's estimate is within a predefined confidence radius of the previous layer's estimate. $\lambda$ is the regularization parameter from Algorithm 1, and $\beta$ is the confidence radius in Theorem 5.1.
Q3. Why is the condition on $\delta$ different between Theorem 5.1 and Lemma C.12?
A3. Theorem 5.1 integrates Lemmas C.12 and C.14, each of which holds with its own failure probability. Applying the union bound over the two events, Theorem 5.1 holds under a slightly different condition on $\delta$ than Lemma C.12. We will clarify this in the revision.
Q4. Could you clarify the technical details of getting rid of $\log(K)$ in Lemma C.14?
A4. The key step is applied in Section D.10, Line 763: after obtaining the intermediate inequality there, we choose the parameters so that the right-hand side no longer involves $K$. Hiding the logarithmic factors within the $\tilde{O}(\cdot)$ notation then yields the claimed result in Lemma C.14. We will clarify the technical details of this part in the revision.
Q5. What’s the connection between the UniSOFT assumption and the result obtained in this paper?
A5. We would like to highlight the connection between the UniSOFT assumption and the result obtained in this paper, which we also discussed in Remark 5.2. First, Papini et al. showed that UniSOFT is necessary for obtaining constant expected regret, suggesting that a growing (non-constant) expected regret is unavoidable without UniSOFT. In our result, we proved a high-probability constant regret bound: with probability at least $1-\delta$, the regret is bounded by a constant that does not depend on $K$.
Our result does not violate Papini et al. When translating the high-probability regret bound into an expected regret bound (see the sketch of the standard conversion below), setting $\delta = 1/K$ leads to a logarithmic expected regret bound. In short, a high-probability constant regret bound translates into a logarithmic expected regret bound.
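For concreteness, one standard way to carry out this conversion is sketched below, where $C(\delta)$ denotes a high-probability constant bound that scales with $\log(1/\delta)$ and we use that the per-episode regret is at most $H$:

$$
\mathbb{E}\big[\mathrm{Regret}(K)\big] \;\le\; (1-\delta)\, C(\delta) + \delta \cdot HK \;\overset{\delta = 1/K}{\le}\; C(1/K) + H \;=\; O(\log K).
$$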
On the other hand, the constant regret upper bound in Papini et al. is obtained under the UniSOFT assumption. Ours, in contrast, obtains the same high-probability constant regret upper bound without UniSOFT, which is the first constant regret bound in the literature that does not require such a prior assumption.
In conclusion, the UniSOFT assumption is a necessary condition for constant expected regret but not for high-probability constant regret. Our result is the first to show a high-probability constant regret bound without this assumption.
Q6. What’s the experimental result and the intuitive understanding of the constant regret?
A6. We have added some experiments on the synthetic dataset to verify the performance of the algorithm, which is presented in the global response. In addition to the experimental results, the understanding of constant regret is quite intuitive: A constant regret bound suggests that the total mistakes made by the RL agent are finite, with high probability. Taking the two-armed stochastic bandit as an example, if the gap between the two arms is strictly positive, it only takes finite time to distinguish the optimal arm from the suboptimal arm. Thus, the total mistakes made by the RL agent are finite.
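To illustrate this intuition, here is a small self-contained simulation of a two-armed bandit with an elimination-style learner (not our algorithm; the gap, confidence radius, and horizon are illustrative assumptions). With high probability the suboptimal arm is eliminated after finitely many episodes, so the number of mistakes stops growing even as the horizon increases.

```python
import numpy as np

# Toy illustration: with a strictly positive gap, an elimination-style learner
# stops pulling the suboptimal arm after finitely many episodes w.h.p.
rng = np.random.default_rng(0)
means = np.array([0.6, 0.4])          # suboptimality gap Delta = 0.2
K, delta = 20_000, 0.01
active = [0, 1]
counts, sums = np.zeros(2), np.zeros(2)
mistakes = 0

for k in range(1, K + 1):
    a = active[k % len(active)]       # round-robin over the surviving arms
    counts[a] += 1
    sums[a] += rng.binomial(1, means[a])
    mistakes += int(a != 0)
    if len(active) > 1 and counts[active].min() > 0:
        mean_hat = sums[active] / counts[active]
        radius = np.sqrt(np.log(4 * counts[active] ** 2 / delta) / (2 * counts[active]))
        lcb, ucb = mean_hat - radius, mean_hat + radius
        # eliminate every arm whose upper confidence bound falls below the
        # best lower confidence bound among the surviving arms
        active = [arm for i, arm in enumerate(active) if ucb[i] >= lcb.max()]

print("suboptimal pulls over", K, "episodes:", mistakes)
```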
- Vial, Daniel, et al. "Improved algorithms for misspecified linear markov decision processes." AISTATS, 2022
- Papini, Matteo, et al. "Reinforcement learning in linear mdps: Constant regret and representation selection." NeurIPS, 2021
This paper gives a constant regret (total regret independent of the number of episodes $K$) algorithm for online reinforcement learning in linear MDPs. The environment is considered to be an approximately linear MDP as in Jin et al. (2019), with the assumption that the misspecification level is not too large. The algorithm is a more sophisticated variant of the original LSVI-UCB, and a further improvement of earlier work that improves the regret of LSVI-UCB (but does not achieve constant regret).
The regret is parametrized by the instance-dependent minimal suboptimality gap $\Delta$, which also appears in earlier instance-dependent regret bounds for linear MDPs (and also in general episodic MDPs).
Their algorithm, Cert-LSVI-UCB, has a regret bound of $\tilde{O}(d^3 H^5 \Delta^{-1} \log(1/\delta))$. The dependence on $\Delta$ is optimal, whereas the dependence on $d$ and $H$ may be suboptimal.
This paper is also an improvement on the previous work that provides "constant regret" bounds for linear MDPs (Papini et al., 2021) in that it does not require the "UniSoft" assumption on the feature mapping (it only requires the standard assumptions of misspecified linear MDPs as in Jin et al. (2019), plus that the misspecification level is sufficiently small).
Strengths
- The algorithm and its analysis gives a substantial improvement over earlier algorithms with instance-dependent bounds for linear MDPs, in that it is the first algorithm with a constant regret bound that requires minimal assumptions, even for misspecified linear MDPs.
- The algorithm is cleanly based on earlier linear MDP algorithms starting from LSVI-UCB, applied in more sophisticated ways (multiple regression phases per episode etc), and the improvements in the algorithm and challenges in the analysis have been highlighted in the main paper itself (especially, in avoiding the dependency of the regret on the number of phases).
Weaknesses
- The contribution of the paper is entirely theoretical, but most of the proof is in the appendix (which is unavoidable).
- Some experimental results (even on synthetic data) would definitely enhance the paper.
Questions
NA
Limitations
- The limitations have been adequately addressed and there is no negative societal impact.
Thank you for the positive feedback on our work! We will address your questions as follows:
Q1. Adding some experimental results in addition to the theoretical result
A1. Thank you for the suggestion. We have included some experimental results on a synthetic dataset during the rebuttal phase. Specifically, these results are provided in our response to all reviewers. The experimental results indicate the potential for achieving constant regret and highlight the contributions of each algorithm component through ablation studies. We will include this information in the camera-ready revision.
The paper presents improved regret bounds for linear MDPs when the suboptimality gap \Delta is known. Most crucially (and a bit surprisingly), the regret bounds are constant in the number of episodes K.
Strengths
The contributions are broadly theoretical, and the removal of the dependence on the number of episodes K seems to be a good contribution to the academic understanding of gap-dependent (instance-dependent) learning bounds for reinforcement learning.
The paper provides a good coverage of related works in this area and describes the setting well.
Weaknesses
- b_h^\pi seems to be undefined in Proposition 3.2.
- \kappa_\ell in Algorithm 1, line 8, as claimed by the authors, helps to get rid of log(K). Can you explain more about how it helps to remove log(K)? Is this the reason why the regret bounds do not have a log(K) dependence?
- I would urge the authors to provide a full expression of the regret without O notation, including the log factors, either in the main text or in the appendix (I could not find a full expression for the result in Theorem 5.1). Specifically, it is surprising, perhaps I am missing something, that the regret is O(1) when Delta = 1/(d^3H^5) when K=1. I suspect some lower-order terms are hidden, or maybe the result is only valid for K \geq something. The asymptotic dependence can still be right.
Questions
Please see above, I am willing to engage during the rebuttal phase.
Limitations
NA
Thank you for your positive feedback. We address your questions as follows.
Q1. What is the meaning of the notation b_h^\pi in Proposition 3.2?
A1. Sorry for the typo caused by the LaTeX macros; the notation b_h^\pi in Proposition 3.2 should be replaced by the correctly defined quantity.
Q2. How does \kappa_\ell eliminate the log(K) dependence?
A2. We discussed the contribution of applying layer-wise quantization in Section 6.1, Line 268. Previous work (Vial et al., 2022) used a global quantization. Therefore, the number of all possible quantized parameters in each layer unnecessarily depends on the total number of episodes $K$. In our algorithm, each layer $\ell$ uses a different quantization parameter $\kappa_\ell$, so the number of quantized parameters no longer depends on $K$ (Lemma D.3).
In addition to the benefit from layer-wise quantization, another technique for eliminating the $\log K$ dependence is the certified estimator discussed in Section 6. Without the certified estimator, one would need to cover the parameters of all layers. The certified estimator builds a connection between the output value functions of consecutive layers $\ell-1$ and $\ell$. As a result, in the analysis we can focus on the concentration for a fixed layer $\ell$, which does not depend on $K$ (Lemma 6.1). The details are discussed in Section 6, Challenge 1.
Besides the aforementioned challenges and techniques for removing the logarithmic dependence in the estimation error, we also provide a novel concentration analysis in Section 6.2 that achieves constant regret. In sharp contrast, the previous analysis (He et al., 2022) incurs a logarithmic dependence in this concentration step.
Q3. Why is the regret O(1) when Delta = 1/(d^3H^5)? What is the full expression of the regret without O notation?
A3. We would like to first clarify that the extreme case with $O(1)$ regret you mentioned is incorrect. The regret is of the form $\tilde{O}(d^3 H^5 \Delta^{-1} \log(1/\delta))$. Therefore, to achieve $O(1)$ regret, one would need to set $\Delta = d^3 H^5$ instead of $\Delta = 1/(d^3 H^5)$. Since the reward is bounded by $1$, the gap is trivially bounded by $H$ because the optimal value function is at most $H$. Therefore, setting $\Delta = d^3 H^5$ is not realizable in this setting.
In response to your request for the regret bound without $O(\cdot)$ notation, the full expression of the regret bound can be obtained through Lemmas C.12, C.13, and C.14. We present the full expression as follows:
$$
\begin{aligned}
\mathrm{Regret}(K) &\le \left(8I_{\Delta} + 24H^2\log(1/\delta)\right)\log\left(4I_{\Delta} + 12H^2\log(1/\delta)\right) + 12H^2,\\
I_{\Delta} &= 4I_{\text{C.13}}\log\left(2I_{\text{C.13}}\right),\\
I_{\text{C.13}} &= c_1 L_{\Delta}(L_{\Delta}+\log(dH))^2 d^3 H^5 \Delta^{-1} \log(L_{\Delta} d/\delta) + c_2 H^3 \Delta^{-1} \log\big(H\log(\Delta^{-1})/\delta\big),\\
L_{\Delta} &= \lceil -\log(0.01 \Delta / H) \rceil.
\end{aligned}
$$
The dominating term is $I_{\text{C.13}} = \tilde{O}\big(d^3 H^5 \Delta^{-1}\log(1/\delta)\big)$. In the paper we use the $\tilde{O}(\cdot)$ notation for readability of the proof due to the complexity of this expression.
- Vial, Daniel, et al. "Improved algorithms for misspecified linear markov decision processes." AISTATS, 2022
Thank you for your detailed response. I am broadly satisfied. My only recommendation (for an updated version) is that the authors focus a bit on explaining the key difference that allows them to obtain constant regret with no dependence on K, as they have mentioned some of the points in the rebuttal here.
Thank you for your continuous support for our paper. As you suggested, we will expand the discussions on the technical challenges in Section 6 in the revision.
We first thank all reviewers for their valuable feedback.
In response to Reviewers 8jf2, 33Lb, and YEKy, we added experiments on synthetic datasets to verify the performance of the algorithm and the contribution of each component. Specifically, we consider a small linear MDP with finite state and action spaces. Each element of the feature vectors $\phi(s,a)$ and of the measures $\mu_h(s')$ is generated from a uniform distribution, and $\mu_h$ is subsequently normalized to ensure that $\langle \phi(s,a), \mu_h(\cdot)\rangle$ is a probability measure, i.e., $\sum_{s'} \langle \phi(s,a), \mu_h(s')\rangle = 1$ for every $(s,a)$. The reward is defined as a linear function of the features.
We also consider adding model misspecification to the transition kernel and the reward function. For a given misspecification level $\zeta$, the ground-truth reward function is obtained by perturbing the linear reward by a term whose magnitude is controlled by $\zeta$. When adding model misspecification to the transition kernel, we first randomly sample a subset of state-action pairs and perturb the transition probabilities on this subset, while keeping the remaining transitions unchanged. One can verify that the resulting model is a $\zeta$-approximately linear MDP.
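For concreteness, a minimal sketch of such a synthetic construction is given below; the problem sizes, the uniform ranges, the normalization, and the way the misspecification is injected are illustrative assumptions rather than the exact configuration behind the reported numbers.

```python
import numpy as np

# Hypothetical sketch of the synthetic linear MDP described above.
rng = np.random.default_rng(0)
d, H, S, A = 4, 3, 5, 5                     # assumed (small) problem sizes

phi = rng.uniform(size=(S, A, d))           # features phi(s, a)
mu = rng.uniform(size=(H, S, d))            # measures mu_h(s')
theta = rng.uniform(size=(H, d))            # reward parameters theta_h

def transition(h, s, a):
    # P_h(s'|s,a) proportional to <phi(s,a), mu_h(s')>, normalized to sum to 1
    p = mu[h] @ phi[s, a]
    return p / p.sum()

def reward(h, s, a, zeta=0.0):
    # linear reward in [0, 1] plus a bounded perturbation of magnitude at most zeta
    r = phi[s, a] @ theta[h] / theta[h].sum()
    return float(np.clip(r + zeta * rng.uniform(-1, 1), 0.0, 1.0))

def step(h, s, a, zeta=0.0):
    s_next = int(rng.choice(S, p=transition(h, s, a)))
    return s_next, reward(h, s, a, zeta)
```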
We investigated misspecification levels $\zeta \in \{0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3\}$ in 16 randomly generated environments over 2000 episodes. We report the cumulative regret and runtime with respect to the different misspecification levels. Additionally, we performed an ablation study by
- removing the certified estimation (Algorithm 2, Line 11) and
- removing the quantization (Algorithm 1, Line 8).
The results of these configurations are presented in the following table. The detailed regret for all misspecification levels is presented in Table 1 of the attached PDF. In Figure 1 of the attached PDF, we plot the cumulative regret over 2000 episodes with respect to the misspecification level $\zeta$. The cumulative regret curves are plotted in Figure 2 of the attached PDF.
| Cert. Est. | Quant. | $\zeta=0$ | $\zeta=0.05$ | $\zeta=0.1$ | $\zeta=0.15$ | $\zeta=0.2$ | $\zeta=0.25$ | $\zeta=0.3$ | Running Time |
|---|---|---|---|---|---|---|---|---|---|
| ✗ | ✗ | 754.84 ± 221.06 | 868.10 ± 278.24 | 1160.24 ± 436.41 | 1583.37 ± 690.67 | 2160.15 ± 794.80 | 2644.05 ± 823.24 | 3206.85 ± 825.13 | 1654.76 |
| ✗ | ✓ | 755.21 ± 220.71 | 868.25 ± 278.19 | 1159.56 ± 436.90 | 1583.99 ± 690.68 | 2160.03 ± 795.28 | 2643.40 ± 823.50 | 3206.92 ± 824.89 | 1654.21 |
| ✓ | ✗ | 758.26 ± 220.62 | 845.62 ± 297.89 | 1015.95 ± 321.12 | 1154.01 ± 340.98 | 1290.21 ± 380.01 | 1433.30 ± 544.80 | 1438.27 ± 471.33 | 1599.66 |
| ✓ | ✓ | 758.63 ± 220.23 | 845.80 ± 297.92 | 1015.41 ± 321.57 | 1154.45 ± 341.32 | 1289.96 ± 380.03 | 1434.09 ± 544.40 | 1438.55 ± 471.24 | 1593.38 |
The experimental results suggest several key findings that support our theoretical analysis:
- When the misspecification level is low, it is possible to achieve constant regret, where the instantaneous regret in the final rounds is approximately zero.
- The certified estimator and the quantization do not significantly affect the algorithm's runtime. If anything, the certified estimator provides an 'early-stopping' condition in Algorithm 2, which slightly reduces the runtime. In particular, our algorithm has the same computational complexity as Vial et al. (2022) and is only moderately more expensive than the vanilla LSVI-UCB (Jin et al., 2020) due to the multi-phase structure.
- The certified estimator helps the algorithm by providing robust estimation in the presence of misspecification. As shown in the table, using the certified estimator does not make a significant difference when the misspecification level is low, but it becomes significant as the misspecification level increases.
- The quantization does not contribute significantly to the results, as the numerical values are intrinsically discrete and quantized. In Figure 2 of the attached PDF, the regret curves with and without quantization largely overlap.
We hope these results help clarify the theoretical insights of our work, and we will include this in the revision.
- Jin, Chi, et al. "Provably efficient reinforcement learning with linear function approximation." COLT, 2020
- Vial, Daniel, et al. "Improved algorithms for misspecified linear markov decision processes." AISTATS, 2022
This paper studies the linear MDP setting and achieves a high-probability regret bound that is independent of the number of episodes, while past work had a log(K) dependence. An expected regret bound necessarily has a log(K) dependence, so the novelty holds when the high-probability bound holds with probability 1-delta where delta is independent of K. Achieving this result required a careful union bound and analysis of a novel algorithm.
Given that such results hold for bandits, this result is not particularly surprising but nevertheless was unknown. Reviewers were skeptical of the practical value of the result, given that the log(K) is necessary for an expected regret bound, and of the questionable dependence on the more relevant parameters like the dimension d and horizon H. Indeed, there are many papers suggesting that the optimal dependence is no greater than d^2 H^3 (versus d^3 H^5) (e.g., Zanette, Lazaric, et al., 2020). The authors' defensive reaction to this well-justified line of inquiry was a rebuttal misstep. We suggest the authors more suitably address this dependence in the paper and comment on ways to improve the algorithm and/or analysis. Nevertheless, the techniques and technical content may warrant a valuable contribution.