
ELEVEN Papers by IIIS Students Accepted to ICLR 2020

January 09, 2020

Recently, the list of papers accepted to the International Conference on Learning Representations (ICLR) 2020 was announced, and eleven of them came from IIIS undergraduate and graduate students. The work described in these eleven papers explores many aspects of deep learning, including generalization bounds for gradient-based algorithms, the implicit bias of gradient descent, influence-based multi-agent exploration, the generalization of two-layer neural networks, Q-learning with UCB exploration, and other related topics.

ICLR is a globally renowned conference for presenting and publishing cutting-edge research on all aspects of deep learning as used in artificial intelligence, statistics, and data science, as well as in important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics. ICLR 2020 received 2,594 submissions; 48 papers were selected as talks, 107 as spotlight presentations, and 532 as posters. One paper first-authored by IIIS PhD candidate Kaifeng Lyu was selected as a talk, and two papers first-authored by IIIS graduate students Tonghan Wang and Jianhao Wang and by IIIS undergraduate Tianzong Zhang were selected as spotlight presentations.

 

The eleven accepted papers are:

  1. Gradient Descent Maximizes the Margin of Homogeneous Neural Networks. Kaifeng Lyu, Jian Li
  2. Influence-Based Multi-Agent Exploration. Tonghan Wang*, Jianhao Wang*, Yi Wu, Chongjie Zhang
  3. Generalization of Two-Layer Neural Networks: An Asymptotic Viewpoint. Jimmy Ba, Murat Erdogdu, Taiji Suzuki, Denny Wu, Tianzong Zhang
  4. Q-learning with UCB Exploration is Sample Efficient for Infinite-Horizon MDP. Yuanhao Wang, Kefan Dong, Xiaoyu Chen, Liwei Wang
  5. Distributed Bandit Learning: Near-Optimal Regret with Efficient Communication. Yuanhao Wang, Jiachen Hu, Xiaoyu Chen, Liwei Wang
  6. On Solving Minimax Optimization Locally: A Follow-the-Ridge Approach. Yuanhao Wang, Guodong Zhang, Jimmy Ba
  7. Deep Audio Priors Emerge From Harmonic Convolutional Networks. Zhoutong Zhang, Yunyun Wang, Chuang Gan, Jiajun Wu, Joshua B. Tenenbaum, Antonio Torralba, William T. Freeman
  8. Hyper-SAGNN: a self-attention based graph neural network for hypergraphs. Ruochi Zhang, Yuesong Zou, Jian Ma
  9. Learning Nearly Decomposable Value-Functions via Communication Minimization. Tonghan Wang*, Jianhao Wang*, Chongyi Zheng, Chongjie Zhang
  10. Episodic Reinforcement Learning with Associative Memory. Guangxiang Zhu*, Zichuan Lin*, Guangwen Yang, Chongjie Zhang
  11. On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning. Jian Li, Xuanyuan Luo, Mingda Qiao

 

(By Yuying Chang)