Mingyu Gao, Stanford University
Time: 2018-04-12, 14:00-15:00
Big data applications such as deep learning and graph analytics process massive data within rigorous time constraints. For such data-intensive workloads, the frequent and expensive data movement between memory and compute modules dominates both execution time and energy consumption, seriously impeding performance scaling. Recent semiconductor 3D integration technologies allow us to avoid data movement by executing computations closer to the data. Nevertheless, realizing such near-data processing systems still faces critical architectural challenges, including efficient processing logic circuits, practical system architectures and programming models, and scalable parallelization and dataflow scheduling schemes.
I have proposed a coherent set of hardware and software solutions to enable efficient, practical, and scalable near-data processing systems for both general-purpose and specialized computing platforms. First, I will present an efficient hardware logic substrate, which uses dense memory arrays, such as DRAM and non-volatile memories, to build a bit-level, multi-context reconfigurable fabric with high density and low power consumption. Then, I will briefly describe a practical near-data processing architecture and runtime system. Finally, I will discuss domain-specific parallelization schemes and dataflow optimizations that exploit different levels of parallelism in deep neural networks to improve scalability. Overall, the presented techniques not only demonstrate order-of-magnitude improvements, but also represent practical large-scale system designs that realize these significant benefits.
Mingyu Gao is a PhD candidate in the Electrical Engineering Department at Stanford University. His research interests include energy-efficient computing and memory systems, with a focus on practical and efficient near-data processing for data-intensive analytics applications, high-density and low-power reconfigurable architectures for datacenter services, and scalable accelerators for large-scale neural networks. He received an MS in electrical engineering from Stanford, and a BS in microelectronics from Tsinghua University in Beijing, China.