PaperHub
Overall score: 7.0 / 10 · Poster · 4 reviewers
Individual ratings: 7, 7, 7, 7 (min 7, max 7, std 0.0)
Confidence: 2.5 · Correctness: 3.3 · Contribution: 3.3 · Presentation: 3.3
NeurIPS 2024

On Tractable $\Phi$-Equilibria in Non-Concave Games

OpenReview · PDF
Submitted: 2024-05-16 · Updated: 2024-11-06
TL;DR

We initiate the study of *tractable* $\Phi$-equilibria in non-concave games and examine several natural families of strategy modifications.

Abstract

While Online Gradient Descent and other no-regret learning procedures are known to efficiently converge to a coarse correlated equilibrium in games where each agent's utility is concave in their own strategy, this is not the case when utilities are non-concave -- a common scenario in machine learning applications involving strategies parameterized by deep neural networks, or when agents' utilities are computed by neural networks, or both. Non-concave games introduce significant game-theoretic and optimization challenges: (i) Nash equilibria may not exist; (ii) local Nash equilibria, though they exist, are intractable; and (iii) mixed Nash, correlated, and coarse correlated equilibria generally have infinite support and are intractable. To sidestep these challenges, we revisit the classical solution concept of $\Phi$-equilibria introduced by Greenwald and Jafari [GJ03], which is guaranteed to exist for an arbitrary set of strategy modifications $\Phi$ even in non-concave games [SL07]. However, the tractability of $\Phi$-equilibria in such games remains elusive. In this paper, we initiate the study of tractable $\Phi$-equilibria in non-concave games and examine several natural families of strategy modifications. We show that when $\Phi$ is finite, there exists an efficient uncoupled learning algorithm that approximates the corresponding $\Phi$-equilibria. Additionally, we explore cases where $\Phi$ is infinite but consists of local modifications, showing that Online Gradient Descent can efficiently approximate $\Phi$-equilibria in non-trivial regimes.
Keywords
Non-Concave Games · $\Phi$-Equilibrium · $\Phi$-Regret Minimization · Learning in Games

Reviews and Discussion

Review
Rating: 7

The paper presents new results about no-$\Phi$-regret learning when the utilities are not concave. When $\Phi$ is either finite or, in a precise sense, "local", it is shown that no-$\Phi$-regret learning is possible, and hence that the corresponding notions of $\Phi$-equilibria can be efficiently approximated at rate $\text{poly}(1/\epsilon)$.

Strengths

The paper presents some interesting and novel results in a regime (non-concave games) where positive results seem to be relatively difficult to come by. It is also very well written and easy to read. I vote to accept and only have some minor comments.

Weaknesses

It was useful to me to keep in mind that the regime $\delta = O(\epsilon)$ (where $O$ hides game/instance-dependent constants) is trivial, because no $\delta$-local change can affect a Lipschitz utility function by more than $O(\delta)$. It would be nice to state this explicitly somewhere.
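To spell this out (writing $G$ for the Lipschitz constant of the utility, which is not named explicitly above): for any $\delta$-local modification $\phi$ satisfying $\|\phi(x) - x\| \le \delta$,
$$
|u(\phi(x)) - u(x)| \;\le\; G\,\|\phi(x) - x\| \;\le\; G\delta,
$$
so whenever $\delta \le \epsilon / G$, every strategy is already an $\epsilon$-approximate $\Phi(\delta)$-equilibrium.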

In Table 1, it would be nice to distinguish the contributions of this paper from past contributions. Perhaps for each cell, include either the past citation or the theorem number in the current paper.

Minor things and typos (not affecting score):

  • Table 1: effifient -> efficient
  • Missing close-parens: Algorithm 1, line 5; two in the bottom half of page 18
  • Algorithm 2, line 3: form -> from

Questions

  1. Could you elaborate on Corollary 1? It is not obvious to me how that bound follows from Theorem 2.
  2. I find it interesting that the algorithms in this paper are presented as randomized algorithms outputting a single $x^t$ rather than deterministic algorithms outputting a distribution $\mu^t \in \Delta(\mathcal{X})$ (as done by Peng and Rubinstein [2024], Dagan et al. [2024], and Zhang et al. [2024]). At least with your techniques, this difference seems to be an unavoidable consequence of the fact that the utilities $u^t$ are nonlinear, and therefore one cannot say $\mathbb{E}_{\phi \sim p}\, u^t(\phi(\cdot)) = u^t(\mathbb{E}_{\phi \sim p}\, \phi(\cdot))$. Do I understand this correctly?
  3. If I understood the previous point correctly: in Theorem 5, the nonlinearity of $u^t$ is a non-issue by assumption. So, is it possible to de-randomize that result by instead deterministically outputting the uniform distribution $\mu^t := \text{unif}\{x_1, \dots, x_K\}$?
  4. Is there an efficient algorithm for $\Phi^{\mathcal{X}}_{\text{Beam}}(\delta)$-regret minimization?
  5. Do the results of this paper have any novel implications for interesting special cases, such as nonconvex-concave zero-sum games, or simply the classical setting of concave no-regret learning? For example, the results of Section 4 seem to have interesting (new?) implications for local (mixed) Nash equilibria in nonconvex-nonconcave zero-sum games.

Papers cited here but not in the submission:

  • Dagan, Y., Daskalakis, C., Fishelson, M., & Golowich, N. (STOC 2024). From External to Swap Regret 2.0: An Efficient Reduction for Large Action Spaces.
  • Peng, B., & Rubinstein, A. (STOC 2024). Fast swap regret minimization and applications to approximate correlated equilibria.

Limitations

yes

Author Response

Thanks for your positive review and constructive comments! We will add a note on the trivial regime where $\delta = O(\epsilon)$. We will update Table 1 to distinguish our contributions better. Below, we address your questions.

Q: Could you elaborate on Corollary 1? It is not obvious to me how that bound follows from Theorem 2.

A: We will add the proof of Corollary 1 in the revised version of the paper. In Corollary 1, we consider the case where $\Phi$ is the class of $M$-Lipschitz functions from $[0,1]^d$ to $[0,1]^d$ and the utility is $G$-Lipschitz. The $\alpha$-covering number of $\Phi$ satisfies $\log N(\alpha) = \Theta((1/\alpha)^d)$ (this fact is also used in [1, Example 23]). Thus, we can run Algorithm 1 over the finite $\alpha$-cover of $\Phi$. This leads to a regret bound of $\sqrt{T(M/\alpha)^d} + \alpha G T$, where the second term is due to the covering error. By choosing $\alpha = M T^{-1/(d+2)}$, we get a regret bound of $O(c \cdot T^{(d+1)/(d+2)})$, where $c$ is a constant that depends only on $G$ and $M$.
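For concreteness, plugging $\alpha = M T^{-1/(d+2)}$ into both terms shows that they balance (constant factors in $G$ and $M$ suppressed):
$$
\sqrt{T\,(M/\alpha)^{d}} \;=\; \sqrt{T \cdot T^{d/(d+2)}} \;=\; T^{(d+1)/(d+2)},
\qquad
\alpha G T \;=\; G M\, T^{(d+1)/(d+2)},
$$
so both the covering-error term and the regret term scale as $T^{(d+1)/(d+2)}$, matching the stated bound.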

[1] Stoltz, Gilles, and Gábor Lugosi. "Learning correlated equilibria in games with compact sets of strategies." Games and Economic Behavior, 2007

Q: I find it interesting that the algorithms in this paper are presented as randomized algorithms outputting a single $x^t$ rather than deterministic algorithms outputting a distribution $\mu^t \in \Delta(\mathcal{X})$ (as done by Peng and Rubinstein [2024], Dagan et al. [2024], and Zhang et al. [2024]). At least with your techniques, this difference seems to be an unavoidable consequence of the fact that the utilities $u^t$ are nonlinear, and therefore one cannot say $\mathbb{E}_{\phi \sim p}[u^t(\phi(\cdot))] = u^t(\mathbb{E}_{\phi \sim p}\,\phi(\cdot))$. Do I understand this correctly?

A: Two issues prevent us from outputting a distribution. The first, as you pointed out, is the nonlinearity of the utility function. The second is that the distribution we constructed has an exponentially large support of size $|\Phi|^{\sqrt{T}}$. To achieve an $\varepsilon$-approximate $\Phi$-equilibrium, we need $T$ to be $\text{poly}(1/\varepsilon)$, so outputting the entire distribution requires time exponential in $1/\varepsilon$. Instead, using a sampling procedure allows us to significantly improve the dependence on $1/\varepsilon$ and obtain an algorithm whose running time is only polynomial in $1/\varepsilon$.

Q: If I understood the previous point correctly: in Theorem 5, the nonlinearity of $u^t$ is a non-issue by assumption. So, is it possible to de-randomize that result by instead deterministically outputting the uniform distribution $\mu^t := \text{unif}\{x_1, \ldots, x_K\}$?

A: We can derandomize this result by directly outputting the distribution μt\mu^t. In this case, we allow the algorithm to output a mixed strategy and consider regret over the expected utility in each iteration. Thank you for this insightful observation.

Q: Is there an efficient algorithm for $\Phi^{\mathcal{X}}_{\text{Beam}}(\delta)$-regret minimization?

A: We currently don't have an efficient algorithm and believe this is an interesting open question. We introduce $\Phi^{\mathcal{X}}_{\text{Beam}}(\delta)$-regret to show that even for simple local strategy modification sets $\Phi(\delta)$, the landscape of efficient local $\Phi(\delta)$-regret minimization is already quite rich, and many basic and interesting questions remain open.

Q: Do the results of this paper have any novel implications for interesting special cases, such as nonconvex-concave zero-sum games, or simply the classical setting of concave no-regret learning? For example the results of Section 4 seem to have interesting (new?) implications for local (mixed) Nash equilibria in nonconvex-nonconcave zero-sum games.

A: To the best of our knowledge, the notion of $\Phi_{\text{proj}}$-regret is new, and its efficient minimization is unknown even in the classical setting of no-regret learning with concave utilities. We think our upper and lower bound results for $\Phi_{\text{proj}}$-regret are, therefore, interesting even in this setting. We do not see any immediate implications for local mixed Nash equilibria in nonconvex-nonconcave zero-sum games, but this is a very interesting open question.

Comment

Thank you. My opinion of the paper was and remains positive, and I will keep my score.

Review
Rating: 7

This paper describes the concept of $\Phi$-equilibrium in non-concave games, a rarely discussed notion in the recent literature. This notion of equilibrium is defined over a set of strategy modifications; with specific choices of $\Phi$, a $\Phi$-equilibrium can recover commonly known notions such as CCEs, though this paper mostly focuses on more restrictive choices of $\Phi$. The paper shows that there exist efficient algorithms that find an $\epsilon$-approximate $\Phi$-equilibrium in tractable time for some specific families of sets $\Phi$.

Strengths

Although some assumptions on the families of sets $\Phi$ seem restrictive, it is known that without any assumptions, the task of finding e.g. a CCE, even approximately and locally, is intractable. This paper provides a solid direction towards a better understanding of algorithms in non-concave games.

The paper is quite self-contained, and Section 1 explains the problem formulation in a way the reviewer found easy to follow.

For the case of infinitely large $\Phi$, the three families considered offer good insight and can potentially cover many existing strategy modification schemes in applications.

Weaknesses

The structure of this paper is questionable. Section 1 alone takes 4 pages, and the remainder of the paper seems rushed and/or crammed. The authors did acknowledge the space constraints, but I believe the paper could be greatly improved in terms of readability. For instance, it would be beneficial to present Algorithm 1 in the main paper, and more space could be devoted to the finite $\Phi$ setting as a warm-up. The reviewer suggests that some paragraphs or even theorems in Section 4 could be moved to the appendix.

Some statements throughout the paper are slightly repetitive, reiterating previous sections; many of them could be shortened and made more concise. For example, Section 1.1 interweaves introduction and contributions.

Although this paper is quite high-level and technical, it would be reasonable to include some empirical results to validate and strengthen the theoretical results, such as the realistic complexity of the sampling procedure. This could perhaps be added in the appendix.

Questions

No

Limitations

The authors adequately address the limitations of this paper.

Author Response

Thank you for the positive review and constructive comments on the structure and presentation of the paper!

We will incorporate your suggestions and improve the presentation in the revised version of the paper. Since one more page is allowed in the camera-ready version, we will assign more space to the finite $\Phi$ setting and include Algorithms 1 and 2, along with the relevant discussion, in the main paper. We will also adjust the structure of Section 4 and make all statements in the main paper more concise.

Complexity of our algorithms: The algorithms we study (except for Algorithms 1 and 3) are gradient-based algorithms like gradient descent and optimistic gradient descent, which have efficient practical implementations. Regarding the complexity of the sampling procedure (Algorithm 2), the per-iteration complexity is exactly $\sqrt{T} \cdot |\Phi|$: we need to sample from a distribution supported on $|\Phi|$ points, evaluate the strategy modification function $\phi$, and repeat this procedure for $\sqrt{T}$ steps. This complexity analysis for Algorithm 2 is tight. There is an equivalent implementation that improves the expected running time by a factor of 2: we could first sample a number $N$ uniformly from $[1, \sqrt{T}]$, run the sampling procedure for only $N$ steps rather than $\sqrt{T}$ steps, and finally return the last point. This improves the expected per-iteration complexity to $(\sqrt{T} \cdot |\Phi|)/2$. We will add this discussion in the revised version of the paper.
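A minimal sketch of what one draw of this sampling procedure could look like, assuming a finite $\Phi$ represented as a list of Python callables and a weight vector for the learner's current distribution over $\Phi$; the function names and interface here are illustrative, not the paper's Algorithm 2:

```python
import random

def sample_strategy(phi_set, weights, x0, num_steps):
    """Compose randomly drawn strategy modifications, starting from x0.

    phi_set   : finite list of strategy-modification functions (the set Phi)
    weights   : the learner's current probabilities over phi_set
    x0        : an initial strategy in the strategy set X
    num_steps : number of composition steps (about sqrt(T) in the rebuttal)
    """
    x = x0
    for _ in range(num_steps):
        # Drawing from a distribution supported on |Phi| points costs O(|Phi|),
        # so one sample costs roughly num_steps * |Phi| operations in total.
        phi = random.choices(phi_set, weights=weights, k=1)[0]
        x = phi(x)
    return x

def sample_strategy_halved(phi_set, weights, x0, max_steps):
    """Variant described in the rebuttal: draw the number of steps N uniformly
    from {1, ..., max_steps}, stop after N steps, and return the last point,
    halving the expected cost."""
    n = random.randint(1, max_steps)
    return sample_strategy(phi_set, weights, x0, n)
```

Each draw costs about `num_steps * len(phi_set)` elementary operations, matching the $\sqrt{T} \cdot |\Phi|$ per-iteration bound stated above.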

Comment

Thank you for the comments; my concerns have been addressed. Given that the issues will be addressed in the revised manuscript, and having also read the discussion between the authors and the other reviewers, I have decided to increase my score.

Review
Rating: 7

This work studies $\Phi$-equilibria in non-concave games and discusses when the $\Phi$-equilibrium of a non-concave game can be learned in polynomial time. The theoretical results presented in this paper indicate that if the set $\Phi$ is finite, then there exists an efficient uncoupled algorithm that converges to the corresponding $\Phi$-equilibria. Moreover, this work also considers the case where $\Phi$ is not finite but consists of local modifications.

Strengths

Significance: This paper gives non-trivial results on $\Phi$-equilibria of non-concave games. It provides a simple condition for determining whether $\Phi$-equilibria can be learned in polynomially many iterations.

Originality: No existing literature has resolved this $\Phi$-equilibrium problem, so the results of this paper are new. The paper extends existing frameworks to consider infinite sets of local strategy modifications, which also brings technical novelty.

Clarity: This work includes a sufficient background introduction and puts adequate material in the appendix for reference. I found it easy to understand the main contributions of this work.

Weaknesses

I believe this is a high-quality paper and I give a clear accept. However, I do have a few points of confusion, listed in the next section.

Questions

  1. I found some papers showing that for Markov games, the CE or CCE can be learned in polynomial time. For example: Jin, Chi, et al. "V-Learning--A Simple, Efficient, Decentralized Algorithm for Multiagent RL." arXiv preprint arXiv:2110.14555 (2021). From my understanding, when setting their horizon $H=1$, the Markov game seems to be a classical game. Is there any difference between this setting and yours?

  2. Are there any applications of $\Phi$-equilibria? For example, when do we need to efficiently compute a $\Phi$-equilibrium?

  3. The definition of $\Phi$-equilibrium looks very similar to the correlated equilibrium (CE) in your Definition 6. Is CE the same as a $\Phi$-equilibrium if $\Phi$ is chosen to be the whole space of strategy modifications?

Limitations

This is a purely theoretical work, so there is no negative societal impact.

Author Response

Thank you for your very positive review. Below, we address your questions.

Q: I found some papers showing that for Markov games, the CE or CCE can be learned in polynomial time. For example: Jin, Chi, et al. "V-Learning--A Simple, Efficient, Decentralized Algorithm for Multiagent RL." arXiv preprint arXiv:2110.14555 (2021). From my understanding, when setting their horizon $H = 1$, the Markov game seems to be a classical game. Is there any difference between this setting and yours?

A: If the horizon $H = 1$, then a general-sum Markov game becomes a general-sum normal-form game where each player's utility function is linear in their own strategy. This is a special case of the non-concave games we consider in this paper since, in a non-concave game, each player's utility could be non-concave in their own strategy. Additionally, Markov games with $H \ge 1$ are special cases of non-concave games that include additional structure, such as the Markovian property. Since our focus is on the more general class of non-concave games, these specific results do not apply.

Q: Are there any applications of $\Phi$-equilibria? For example, when do we need to efficiently compute a $\Phi$-equilibrium?

A: The notion of $\Phi$-equilibrium is very general. If $\Phi$ consists of all constant strategy modifications, the corresponding $\Phi$-equilibrium coincides with the notion of coarse correlated equilibrium (CCE); if $\Phi$ consists of all strategy modifications, the corresponding $\Phi$-equilibrium coincides with the notion of correlated equilibrium (CE). CE and CCE are both central and classical notions, extensively studied in the literature and used in practice. Generally, a $\Phi$-equilibrium guarantees that no single agent has an incentive to unilaterally deviate by using strategy modifications from their set $\Phi_i$. This provides a stability guarantee that we believe will be relevant for many economics and machine learning applications, including game AI and multi-agent reinforcement learning.
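For reference, one plausible rendering of the general definition following Greenwald and Jafari [GJ03] (the exact notation in the paper's Definition 6 may differ): a distribution $\sigma$ over joint strategy profiles is an $\epsilon$-approximate $\Phi$-equilibrium if
$$
\mathbb{E}_{x \sim \sigma}\!\left[u_i\big(\phi_i(x_i),\, x_{-i}\big) - u_i(x)\right] \;\le\; \epsilon
\qquad \text{for every player } i \text{ and every } \phi_i \in \Phi_i.
$$
Restricting each $\Phi_i$ to constant modifications recovers CCE, and allowing all modifications recovers CE, as noted above.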

Q: The definition of $\Phi$-equilibrium looks very similar to the correlated equilibrium (CE) in your Definition 6. Is CE the same as a $\Phi$-equilibrium if $\Phi$ is chosen to be the whole space?

A: Yes. If we choose $\Phi$ to be all possible strategy modifications, the corresponding $\Phi$-equilibrium is exactly CE.

Comment

Thanks for your response. I have read the rebuttal and I will keep my current positive score.

Review
Rating: 7

The paper studies non-concave games and $\Phi$-equilibria. When $\Phi$ is finite, the paper shows that there exists an uncoupled learning algorithm that can efficiently find the equilibria. Under certain classes of strategy modifications, Online Gradient Descent can also approximate $\Phi$-equilibria when $\Phi$ is infinite.

Strengths

I am not an expert in non-concave games and $\Phi$-equilibria, so I suggest referring to the other reviewers for strengths and weaknesses.

Weaknesses

I am not an expert in non-concave games and $\Phi$-equilibria, so I suggest referring to the other reviewers for strengths and weaknesses.

Questions

I am not an expert in non-concave games and $\Phi$-equilibria, so I suggest referring to the other reviewers for strengths and weaknesses.

Limitations

yes

Final Decision

The paper focuses on no-$\Phi$-regret learning when the utilities of the agents are not concave. It is shown that when $\Phi$ is either finite or "local" in a precise sense (only local deviations are allowed), no-$\Phi$-regret learning is possible, and therefore the corresponding notions of $\Phi$-equilibria can be efficiently approximated in time $\text{poly}(1/\epsilon)$. On the negative side, the authors did not discuss the suboptimality aspects of the equilibrium notion considered (most notably, that even a global utility minimizer can be a $\Phi$-equilibrium in the range of $\epsilon$-$\delta$ parameters considered by the authors), but the reviewers were nonetheless positive about this paper and the AC recommends acceptance.