
Theoretical limitations of multi-layer Transformers

Speaker: Lijie Chen, UC Berkeley EECS
Time: 2025-03-24, 09:45–12:15
Venue: Tencent Meeting (677-569-896)

Abstract:

Transformers, especially decoder-only variants, are the backbone of most modern large language models, yet little is understood about their expressive power beyond the simple $1$-layer case. Because multi-layer models are difficult to analyze, all previous work relied on unproven complexity conjectures to show limitations of multi-layer transformers. In this work, we prove the first unconditional lower bound against multi-layer decoder-only transformers: for any constant $L$, any $L$-layer decoder-only transformer needs polynomial model dimension ($n^{\Omega(1)}$) to perform sequential composition of $L$ functions over an input of $n$ tokens.
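To make the task concrete, here is a minimal Python sketch of one natural formalization of $L$-step sequential composition, where each function is given as a lookup table over a domain of size $n$. The helper names (compose_task_instance, compose) and the lookup-table encoding are illustrative assumptions, not necessarily the paper's exact input format.

    import random

    def compose_task_instance(n, L, seed=0):
        """Generate L random functions on {0, ..., n-1} plus a start point."""
        rng = random.Random(seed)
        tables = [[rng.randrange(n) for _ in range(n)] for _ in range(L)]
        x0 = rng.randrange(n)
        return tables, x0

    def compose(tables, x0):
        """Ground-truth answer: apply the L functions in sequence."""
        x = x0
        for table in tables:
            x = table[x]
        return x

    tables, x0 = compose_task_instance(n=8, L=3)
    print(compose(tables, x0))  # the value the model must produce

The lower bound says that, in this kind of setting, an $L$-layer decoder-only transformer cannot compute the composed value unless its model dimension grows polynomially in $n$.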

As a consequence, our results give: (1) the first (unconditional) depth-size trade-off for multi-layer transformers, showing that the $L$-step composition task is exponentially harder for $L$-layer models than for $(L+1)$-layer ones; (2) an unconditional separation between encoders and decoders, exhibiting a task that is hard for decoders but can be solved by an exponentially shallower and smaller encoder; (3) a provable advantage of chain-of-thought, exhibiting a task that becomes exponentially easier with chain-of-thought.

On the technical side, we propose a multi-party autoregressive communication model that captures the computation of a decoder-only transformer. We also introduce a new proof technique that iteratively finds a certain indistinguishable decomposition of all possible inputs to prove lower bounds in this model. We believe our new communication model and proof technique will help further the understanding of the computational power of transformers.
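As a toy illustration of what an autoregressive communication model might look like, the sketch below has party $i$ hold token $x_i$ and speak in order in each of a fixed number of rounds, with party $i$'s message depending only on its own token and on messages already sent by parties $j \le i$ (mimicking causal attention). The specific rules, message sizes, and round structure in the paper may differ; this is only meant to convey the flavor.

    def run_protocol(tokens, num_rounds, message_fn):
        """Simulate the rounds and return the transcript of all messages."""
        n = len(tokens)
        transcript = []  # list of (round, party, message)
        for r in range(num_rounds):
            for i in range(n):
                # Party i only sees messages from parties at or before its position.
                visible = [m for (_, j, m) in transcript if j <= i]
                msg = message_fn(r, i, tokens[i], visible)
                transcript.append((r, i, msg))
        return transcript

    # Example: each party forwards a running parity of what it has seen.
    tokens = [1, 0, 1, 1]
    transcript = run_protocol(
        tokens, num_rounds=2,
        message_fn=lambda r, i, x, seen: (x + sum(seen)) % 2,
    )
    print(transcript[-1])  # last party's message in the final round

Lower bounds against such protocols transfer to decoder-only transformers because each layer of causal attention can be simulated by one round of this kind of ordered, one-directional communication.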

Based on joint work with Binghui Peng and Hongxun Wu.

Short Bio:

https://chen-lijie.github.io/