Speaker:
Zhiyuan Li, Princeton University
Time: 2019-12-18, 14:00–15:00
Venue: Block D, 15th floor, Science & Technology Mansion, Tsinghua Science Park
Abstract:
Intriguing empirical evidence exists that deep learning can work well with exotic schedules for varying the learning rate. This paper suggests that the phenomenon may be due to Batch Normalization (BN), which is ubiquitous and provides benefits for optimization and generalization across all standard architectures. The following new results are shown about BN with weight decay and momentum (in other words, the typical use case, which was not considered in earlier theoretical analyses of stand-alone BN):

1. Training can be done using SGD with momentum and an exponentially increasing learning rate schedule, i.e., the learning rate grows by a factor of (1 + α) in every epoch for some α > 0. To the best of our knowledge, this is the first time such a rate schedule has been used successfully, let alone with highly successful architectures. As expected, such training rapidly blows up the network weights, but the network stays well behaved thanks to normalization.

2. A mathematical explanation of the success of this rate schedule: a rigorous proof that it is equivalent to the standard setting of BN + SGD + standard rate tuning + weight decay + momentum. The equivalence also holds for other normalization layers such as Group Normalization, Layer Normalization, and Instance Normalization.

3. A worked-out toy example illustrating the above linkage of hyperparameters: using either weight decay or BN alone reaches the global minimum, but convergence fails when both are used.
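To make the exponentially increasing schedule from result 1 concrete, below is a minimal PyTorch sketch (illustrative only, not the authors' implementation): SGD with momentum on a small batch-normalized network, with the learning rate multiplied by (1 + α) after every epoch via ExponentialLR. The toy model, the dummy data, and the value α = 0.05 are assumptions made purely for illustration.

# Minimal sketch (not the paper's code): SGD + momentum with an exponentially
# *increasing* learning rate, lr_t = lr_0 * (1 + alpha)^t, stepped once per epoch.
import torch
import torch.nn as nn

# Small CNN with Batch Normalization; normalization is what keeps the network
# well behaved even though the growing learning rate blows up the raw weights.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)

alpha = 0.05  # hypothetical growth factor; the abstract only requires alpha > 0
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# ExponentialLR multiplies the learning rate by gamma after each scheduler.step();
# gamma = 1 + alpha > 1 yields the exponentially increasing schedule.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=1 + alpha)

loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    # Dummy batch standing in for a real data loader.
    x = torch.randn(32, 3, 32, 32)
    y = torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    scheduler.step()  # learning rate grows by (1 + alpha) each epoch
    print(f"epoch {epoch}: lr = {scheduler.get_last_lr()[0]:.4f}")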
Bio:
Zhiyuan Li is a third-year PhD candidate in the Department of Computer Science at Princeton University. He received his bachelor's degree from IIIS, Tsinghua University, in 2017. His research interests include theoretical machine learning, deep learning theory, and non-convex optimization. He is currently working on the optimization and generalization of ultra-wide neural networks and on theoretical analysis of the complicated interplay among the many tricks used in deep learning.