
Fast swap regret minimization and applications to approximate correlated equilibria

Speaker: Binghui Peng, Columbia University
Time: 2024-06-20, 15:00-16:00
Venue: FIT 1-222


We give a simple and computationally efficient algorithm that, for any constant ε > 0, obtains εT distributional swap regret within only T = polylog(n) rounds; this is an exponential improvement over the super-linear number of rounds required by the state-of-the-art algorithm, and resolves the main open problem of [Blum-Mansour JMLR'07]. Our algorithm has an exponential dependence on ε, but we prove a new, matching lower bound.
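For background, the swap regret of a sequence of mixed strategies x_1, ..., x_T over n actions against loss vectors ℓ_1, ..., ℓ_T is standardly defined (following Blum-Mansour; the notation here is illustrative, not taken from the talk) as the gain from the best post-hoc "swap" of each action for another:

$$\mathrm{SwapReg}(T) \;=\; \max_{\pi : [n] \to [n]} \; \sum_{t=1}^{T} \sum_{i=1}^{n} x_t(i)\,\bigl(\ell_t(i) - \ell_t(\pi(i))\bigr),$$

so the guarantee above says this quantity can be driven below εT after only T = polylog(n) rounds.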

Our algorithm for swap regret implies faster convergence to ε-correlated equilibrium (ε-CE) in several regimes. For normal-form two-player games with n actions, it implies: the first uncoupled dynamics that converge to the set of ε-CE in polylogarithmic rounds; a polylog(n)-bit communication protocol for ε-CE in two-player games (resolving an open problem mentioned by [Babichenko-Rubinstein STOC'17, Ganor-CS APPROX'18, Goos-Rubinstein FOCS'18]); and an O(n)-query algorithm for ε-CE (resolving an open problem of [Babichenko 2020] and obtaining the first separation between ε-CE and ε-Nash equilibrium in the query-complexity model). For extensive-form games, our algorithm implies a PTAS for normal-form correlated equilibria, a solution concept widely conjectured to be computationally intractable (e.g., [von Stengel-Forges MOR'08, Fujii'23]).
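The quantity the dynamics drive below εT can be evaluated directly from a play history. Below is a minimal NumPy sketch (illustrative background only, not the talk's algorithm) computing the standard empirical swap regret; it uses the fact that the maximum over swap functions decomposes into an independent best-deviation choice per action:

```python
import numpy as np

def swap_regret(xs, losses):
    """Empirical (distributional) swap regret of distributions xs[t]
    against loss vectors losses[t], maximized over all swap functions
    pi: [n] -> [n]. The max decomposes per action i: each i picks its
    best fixed deviation target independently."""
    xs = np.asarray(xs, dtype=float)          # shape (T, n)
    losses = np.asarray(losses, dtype=float)  # shape (T, n)
    # A[i, j] = sum_t xs[t, i] * losses[t, j]:
    # loss accumulated if action i's probability were rerouted to j
    A = np.einsum('ti,tj->ij', xs, losses)
    incurred = np.diag(A)    # loss actually attributed to playing i
    best = A.min(axis=1)     # best fixed deviation target for each i
    return float(np.sum(incurred - best))
```

Since the diagonal entry is always an admissible choice for the minimum, the result is nonnegative; a no-swap-regret guarantee bounds it by εT.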

Based on joint work with Aviad Rubinstein.

Short Bio:

Binghui Peng recently obtained his Ph.D. from Columbia University, advised by Christos Papadimitriou and Xi Chen. Previously, he studied Computer Science in the Yao Class at Tsinghua University. He studies the theory of computation and develops algorithms and complexity theory for machine learning, artificial intelligence, and game theory. His research has addressed long-standing questions in learning theory and game theory, and his papers have been published in theory conferences (STOC/FOCS/SODA, including a best student paper award at SODA) and ML conferences (NeurIPS/ICLR/ACL).